Publications

  • Aleman, A., Formisano, E., Koppenhagen, H., Hagoort, P., De Haan, E. H. F., & Kahn, R. S. (2005). The functional neuroanatomy of metrical stress evaluation of perceived and imagined spoken words. Cerebral Cortex, 15(2), 221-228. doi:10.1093/cercor/bhh124.

    Abstract

    We hypothesized that areas in the temporal lobe that have been implicated in the phonological processing of spoken words would also be activated during the generation and phonological processing of imagined speech. We tested this hypothesis using functional magnetic resonance imaging during a behaviorally controlled task of metrical stress evaluation. Subjects were presented with bisyllabic words and had to determine the alternation of strong and weak syllables. Thus, they were required to discriminate between weak-initial words and strong-initial words. In one condition, the stimuli were presented auditorily to the subjects (by headphones). In the other condition the stimuli were presented visually on a screen and subjects were asked to imagine hearing the word. Results showed activation of the supplementary motor area, inferior frontal gyrus (Broca's area) and insula in both conditions. In the superior temporal gyrus (STG) and in the superior temporal sulcus (STS) strong activation was observed during the auditory (perceptual) condition. However, a region located in the posterior part of the STS/STG also responded during the imagery condition. No activation of this same region of the STS was observed during a control condition which also involved processing of visually presented words, but which required a semantic decision from the subject. We suggest that processing of metrical stress, with or without auditory input, relies in part on cortical interface systems located in the posterior part of STS/STG. These results corroborate behavioral evidence regarding phonological loop involvement in auditory–verbal imagery.
  • Allen, S., Ozyurek, A., Kita, S., Brown, A., Furman, R., Ishizuka, T., & Fujii, M. (2007). Language-specific and universal influences in children's syntactic packaging of manner and path: A comparison of English, Japanese, and Turkish. Cognition, 102, 16-48. doi:10.1016/j.cognition.2005.12.006.

    Abstract

    Different languages map semantic elements of spatial relations onto different lexical and syntactic units. These crosslinguistic differences raise important questions for language development in terms of how this variation is learned by children. We investigated how Turkish-, English-, and Japanese-speaking children (mean age 3;8) package the semantic elements of Manner and Path onto syntactic units when both the Manner and the Path of the moving Figure occur simultaneously and are salient in the event depicted. Both universal and language-specific patterns were evident in our data. Children used the semantic-syntactic mappings preferred by adult speakers of their own languages, and even expressed subtle syntactic differences that encode different relations between Manner and Path in the same way as their adult counterparts (i.e., Manner causing vs. incidental to Path). However, not all types of semantics-syntax mappings were easy for children to learn (e.g., expressing Manner and Path elements in two verbal clauses). In such cases, Turkish- and Japanese-speaking children frequently used syntactic patterns that were not typical in the target language but were similar to patterns used by English-speaking children, suggesting some universal influence. Thus, both language-specific and universal tendencies guide the development of complex spatial expressions.
  • Ameka, F. K. (2007). The coding of topological relations in verbs: The case of Likpe (Sɛkpɛle). Linguistics, 45(5), 1065-1104. doi:10.1515/LING.2007.032.

    Abstract

    This article examines the grammar, use and meaning of fifteen verbs used in the Basic Locative Construction (BLC) of Likpe — a Ghana-Togo-Mountain language. The verbs fall into four semantic subclasses: (a) basic topological relations: t 'be.at', tk 'be.on', kpé 'be.in', and fi 'be.near'; (b) postural verbs: sí 'sit', ny 'stand', fáka 'hang', yóma 'hang', kps 'lean', fus 'squat', and labe 'lie'; (c) “distribution” verbs: kpó 'be spread, heaped' and tí 'be covered'; and (d) “adhesion” verbs: má 'be gripped, be fixed', mánkla 'be stuck to'. Likpe locative predications reflect an ontological commitment to the overall topological relation between Figure and Ground and are not focused just on the Figure or the Ground. Various factors determine the choice of “competing” verbs for particular scenarios: animacy, nonindividuation of the Figure, permanency of the configuration and the speaker's desire to be referentially precise or to present stereotypical information. It is demonstrated that in situations where there is a choice, speakers tend to use the more general verbs (stereotype information). The implications of this tendency for the development of a language from a multiverb language using several verbs (e.g., 15) in its BLC to using only a small set of verbs in its BLC, just as some of Likpe's neighbors have done, are considered.
  • Ameka, F. K. (1987). A comparative analysis of linguistic routines in two languages: English and Ewe. Journal of Pragmatics, 11(3), 299-326. doi:10.1016/0378-2166(87)90135-4.

    Abstract

    It is very widely acknowledged that linguistic routines are not only embodiments of the sociocultural values of speech communities that use them, but their knowledge and appropriate use also form an essential part of a speaker's communicative/pragmatic competence. Despite this, many studies concentrate more on describing the use of routines rather than explaining the socio-cultural aspects of their meaning and the way they affect their use. It is the contention of this paper that there is the need to go beyond descriptions to explanations and explications of the use and meaning of routines that are culturally and socially revealing. This view is illustrated by a comparative analysis of functionally equivalent formulaic expressions in English and Ewe. The similarities are noted and the differences explained in terms of the socio-cultural traditions associated with the respective languages. It is argued that insights gained from such studies are valuable for crosscultural understanding and communication as well as for second language pedagogy.
  • Ameka, F. K., & Essegbey, J. (2007). Cut and break verbs in Ewe and the causative alternation construction. Cognitive Linguistics, 18(2), 241-250. doi:10.1515/COG.2007.011.

    Abstract

    Ewe verbs covering the cutting and breaking domain divide into four morpho-syntactic classes that can be ranked according to agentivity. We demonstrate that the highly non-agentive break verbs participate in the causative-inchoative alternation while the highly agentive cut verbs do not, as expected from Guerssel et al.'s (1985) hypothesis. However, four verbs tso 'cut with precision', 'cut', 'snap-off', and dze 'split', are used transitively when an instrument is required for the severance to be effected, and intransitively when not. We reject a lexicalist analysis that would postulate polysemy for these verbs and argue for a construction approach.
  • Ameka, F. K., & Levinson, S. C. (2007). Introduction-The typology and semantics of locative predicates: Posturals, positionals and other beasts. Linguistics, 45(5), 847-872. doi:10.1515/LING.2007.025.

    Abstract

    This special issue is devoted to a relatively neglected topic in linguistics, namely the verbal component of locative statements. English tends, of course, to use a simple copula in utterances like “The cup is on the table”, but many languages, perhaps as many as half of the world's languages, have a set of alternate verbs, or alternate verbal affixes, which contrast in this slot. Often these are classificatory verbs of 'sitting', 'standing' and 'lying'. For this reason, perhaps, Aristotle listed position among his basic (“noncomposite”) categories.
  • Ameka, F. K., & Dorvlo, K. (2007). The Ewe language. Verba Africana series - Video documentation and Digital Materials, 1.
  • Andrieu, C., Figuerola, H., Jacquemot, E., Le Guen, O., Roullet, J., & Salès, C. (2005). Parfum de rose, odeur de sainteté: Un sermon Tzeltal sur la première sainte des Amériques. Ateliers du LESC, 29, 11-67. Retrieved from http://ateliers.revues.org/document174.html.
  • Baayen, R. H., & Moscoso del Prado Martín, F. (2005). Semantic density and past-tense formation in three Germanic languages. Language, 81(3), 666-698. doi:10.1353/lan.2005.0112.

    Abstract

    It is widely believed that the difference between regular and irregular verbs is restricted to form. This study questions that belief. We report a series of lexical statistics showing that irregular verbs cluster in denser regions in semantic space. Compared to regular verbs, irregular verbs tend to have more semantic neighbors that in turn have relatively many other semantic neighbors that are morphologically irregular. We show that this greater semantic density for irregulars is reflected in association norms, familiarity ratings, visual lexical-decision latencies, and word-naming latencies. Meta-analyses of the materials of two neuroimaging studies show that in these studies, regularity is confounded with differences in semantic density. Our results challenge the hypothesis of the supposed formal encapsulation of rules of inflection and support lines of research in which sensitivity to probability is recognized as intrinsic to human language.
  • Baayen, H., Levelt, W. J. M., Schreuder, R., & Ernestus, M. (2007). Paradigmatic structure in speech production. Proceedings from the Annual Meeting of the Chicago Linguistic Society, 43(1), 1-29.

    Abstract

    The main goal of the present study is to trace the consequences of local and global markedness for the processing of singular and plural nouns. Decompositional models such as those proposed by Pinker (1997, 1999) and Levelt et al. (1999) predict a lexeme frequency effect and no effects of the frequencies of the singular and the plural forms. Experiments 1 and 4 reveal the expected lexeme frequency effect. Furthermore, in these experiments there are no clear independent effects of the frequencies of the inflected forms. However, the effects of Entropy and Relative Entropy that emerge from these experiments show that in production knowledge of the probabilities of the individual inflected forms does play a role, albeit indirectly. These entropy effects bear witness to the importance of paradigmatic organization of inflected forms in the mental lexicon, both at the level of individual lexemes (Entropy) and at the general level of the class of nouns (Relative Entropy).
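    The Entropy and Relative Entropy predictors referred to above are, in their standard information-theoretic form, defined over the frequency distribution of a lexeme's inflected forms; a minimal sketch of the usual definitions follows (the paper's exact computation may differ in detail):

    H = -\sum_{f \in \{\mathrm{sg},\,\mathrm{pl}\}} p(f)\,\log_2 p(f), \qquad p(f) = \frac{\mathrm{freq}(f)}{\mathrm{freq}(\mathrm{sg}) + \mathrm{freq}(\mathrm{pl})}

    \mathrm{RE} = D(p \,\|\, q) = \sum_{f} p(f)\,\log_2 \frac{p(f)}{q(f)}

    where q is the corresponding distribution of singular and plural frequencies for the class of nouns as a whole; these are the Shannon entropy and the Kullback-Leibler divergence, respectively.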
  • Baayen, H., & Danziger, E. (1993). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.14 1993. Nijmegen: MPI for Psycholinguistics.
  • Bastiaansen, M. C. M., Van der Linden, M., Ter Keurs, M., Dijkstra, T., & Hagoort, P. (2005). Theta responses are involved in lexico-semantic retrieval during language processing. Journal of Cognitive Neuroscience, 17, 530-541. doi:10.1162/0898929053279469.

    Abstract

    Oscillatory neuronal dynamics, observed in the human electroencephalogram (EEG) during language processing, have been related to the dynamic formation of functionally coherent networks that serve the role of integrating the different sources of information needed for understanding the linguistic input. To further explore the functional role of oscillatory synchrony during language processing, we quantified event-related EEG power changes induced by the presentation of open-class (OC) words and closed-class (CC) words in a wide range of frequencies (from 1 to 30 Hz), while subjects read a short story. Word presentation induced three oscillatory components: a theta power increase (4–7 Hz), an alpha power decrease (10–12 Hz), and a beta power decrease (16–21 Hz). Whereas the alpha and beta responses showed mainly quantitative differences between the two word classes, the theta responses showed qualitative differences between OC words and CC words: A theta power increase was found over left temporal areas for OC words, but not for CC words. The left temporal theta increase may index the activation of a network involved in retrieving the lexical–semantic properties of the OC items.
  • Bauer, B. L. M. (1987). L’évolution des structures morphologiques et syntaxiques du latin au français. Travaux de linguistique, 14-15, 95-107.
  • Bauer, B. L. M. (2007). Report on the XVIth International Conference on Historical Linguistics. General Linguistics, 43, 145-149.
  • Becker, A., Dittmar, N., Gutmann, M., Klein, W., Rieck, B.-O., Senft, G., Senft, I., Steckner, W., & Thielecke, E. (1978). The unguided learning of German by Spanish and Italian workers: Symposium on the Sociological Analysis of Education and Training Programmes for Migrant Workers and their Families. Paris: UNESCO Documentations and Publications.
  • Belke, E., Brysbaert, M., Meyer, A. S., & Ghyselinck, M. (2005). Age of acquisition effects in picture naming: Evidence for a lexical-semantic competition hypothesis. Cognition, 96, B45-B54. doi:10.1016/j.cognition.2004.11.006.

    Abstract

    In many tasks the effects of frequency and age of acquisition (AoA) on reaction latencies are similar in size. However, in picture naming the AoA-effect is often significantly larger than expected on the basis of the frequency-effect. Previous explanations of this frequency-independent AoA-effect have attributed it to the organisation of the semantic system or to the way phonological word forms are stored in the mental lexicon. Using a semantic blocking paradigm, we show that semantic context effects on naming latencies are more pronounced for late-acquired than for early-acquired words. This interaction between AoA and naming context is likely to arise during lexical-semantic encoding, which we put forward as the locus for the frequency-independent AoA-effect.
  • Belke, E., Meyer, A. S., & Damian, M. F. (2005). Refractory effects in picture naming as assessed in a semantic blocking paradigm. The Quarterly Journal of Experimental Psychology Section A, 58, 667-692. doi:10.1080/02724980443000142.

    Abstract

    In the cyclic semantic blocking paradigm participants repeatedly name sets of objects with semantically related names (homogeneous sets) or unrelated names (heterogeneous sets). The naming latencies are typically longer in related than in unrelated sets. In a first experiment we replicated this semantic blocking effect and demonstrated that the effect only arose after all objects of a set had been shown and named once. In a second experiment, the objects of a set were presented simultaneously (instead of on successive trials). Evidence for semantic blocking was found in the naming latencies and in the gaze durations for the objects, which were longer in homogeneous than in heterogeneous sets. For the gaze-to-speech lag between the offset of gaze on an object and the onset of the articulation of its name, a repetition priming effect was obtained but no blocking effect. A further experiment showed that the blocking effect for speech onset latencies generalized to new, previously unnamed lexical items. We propose that the blocking effect is due to refractory behaviour in the semantic system.
  • Belke, E., & Meyer, A. S. (2007). Single and multiple object naming in healthy ageing. Language and Cognitive Processes, 22, 1178-1211. doi:10.1080/01690960701461541.

    Abstract

    We compared the performance of young (college-aged) and older (50+ years) speakers in a single object and a multiple object naming task and assessed their susceptibility to semantic and phonological context effects when producing words amidst semantically or phonologically similar or dissimilar words. In single object naming, there were no performance differences between the age groups. In multiple object naming, we observed significant age-related slowing, expressed in longer gazes to the objects and slower speech. In addition, the direction of the phonological context effects differed for the two groups. The results of a supplementary experiment showed that young speakers, when adopting a slow speech rate, coordinated their eye movements and speech differently from the older speakers. Our results imply that age-related slowing in connected speech is not a direct consequence of a slowing of lexical retrieval processes. Instead, older speakers might allocate more processing capacity to speech monitoring processes, which would slow down their concurrent speech planning processes.

  • Bien, H., Levelt, W. J. M., & Baayen, R. H. (2005). Frequency effects in compound production. Proceedings of the National Academy of Sciences of the United States of America, 102(49), 17876-17881.

    Abstract

    Four experiments investigated the role of frequency information in compound production by independently varying the frequencies of the first and second constituent as well as the frequency of the compound itself. Pairs of Dutch noun-noun compounds were selected such that there was a maximal contrast for one frequency while matching the other two frequencies. In a position-response association task, participants first learned to associate a compound with a visually marked position on a computer screen. In the test phase, participants had to produce the associated compound in response to the appearance of the position mark, and we measured speech onset latencies. The compound production latencies varied significantly according to factorial contrasts in the frequencies of both constituting morphemes but not according to a factorial contrast in compound frequency, providing further evidence for decompositional models of speech production. In a stepwise regression analysis of the joint data of Experiments 1-4, however, compound frequency was a significant nonlinear predictor, with facilitation in the low-frequency range and a trend toward inhibition in the high-frequency range. Furthermore, a combination of structural measures of constituent frequencies and entropies explained significantly more variance than a strict decompositional model, including cumulative root frequency as the only measure of constituent frequency, suggesting a role for paradigmatic relations in the mental lexicon.
  • Bohnemeyer, J., & Brown, P. (2007). Standing divided: Dispositional verbs and locative predications in two Mayan languages. Linguistics, 45(5), 1105-1151. doi:10.1515/LING.2007.033.

    Abstract

    The Mayan languages Tzeltal and Yucatec have large form classes of “dispositional” roots which lexicalize spatial properties such as orientation, support/suspension/blockage of motion, and configurations of parts of an entity with respect to other parts. But speakers of the two languages deploy this common lexical resource quite differently. The roots are used in both languages to convey dispositional information (e.g., answering “how” questions), but Tzeltal speakers also use them in canonical locative descriptions (e.g., answering “where” questions), whereas Yucatec speakers only use dispositionals in locative predications when prompted by the context to focus on dispositional properties. We describe the constructions used in locative and dispositional descriptions in response to two different picture stimuli sets. Evidence against the proposal that Tzeltal uses dispositionals to compensate for its single, semantically generic preposition (Brown 1994; Grinevald 2006) comes from the finding that Tzeltal speakers use relational spatial nominals in the “Ground phrase” — the expression of the place at which an entity is located — about as frequently as Yucatec speakers. We consider several alternative hypotheses, including a possible larger typological difference that leads Tzeltal speakers, but not Yucatec speakers, to prefer “theme-specific” verbs not just in locative predications, but in any predication involving a theme argument.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., Ibarretxe-Antuñano, I., Kita, S., Lüpke, F., & Ameka, F. K. (2007). Principles of event segmentation in language: The case of motion events. Language, 83(3), 495-532. doi:10.1353/lan.2007.0116.

    Abstract

    We examine universals and crosslinguistic variation in constraints on event segmentation. Previous typological studies have focused on segmentation into syntactic (Pawley 1987) or intonational units (Givón 1991). We argue that the correlation between such units and semantic/conceptual event representations is language-specific. As an alternative, we introduce the MACRO-EVENT PROPERTY (MEP): a construction has the MEP if it packages event representations such that temporal operators necessarily have scope over all subevents. A case study on the segmentation of motion events into macro-event expressions in eighteen genetically and typologically diverse languages has produced evidence of two types of design principles that impact motion-event segmentation: language-specific lexicalization patterns and universal constraints on form-to-meaning mapping.
  • Bonte, M. L., Mitterer, H., Zellagui, N., Poelmans, H., & Blomert, L. (2005). Auditory cortical tuning to statistical regularities in phonology. Clinical Neurophysiology, 116(12), 2765-2774. doi:10.1016/j.clinph.2005.08.012.

    Abstract

    Objective: Ample behavioral evidence suggests that distributional properties of the language environment influence the processing of speech. Yet, how these characteristics are reflected in neural processes remains largely unknown. The present ERP study investigates neurophysiological correlates of phonotactic probability: the distributional frequency of phoneme combinations. Methods: We employed an ERP measure indicative of experience-dependent auditory memory traces, the mismatch negativity (MMN). We presented pairs of non-words that differed by the degree of phonotactic probability in a codified passive oddball design that minimizes the contribution of acoustic processes. Results: In Experiment 1 the non-word with high phonotactic probability (notsel) elicited a significantly enhanced MMN as compared to the non-word with low phonotactic probability (notkel). In Experiment 2 this finding was replicated with a non-word pair with a smaller acoustic difference (notsel–notfel). An MMN enhancement was not observed in a third acoustic control experiment with stimuli having comparable phonotactic probability (so–fo). Conclusions: Our data suggest that auditory cortical responses to phoneme clusters are modulated by statistical regularities of phoneme combinations. Significance: This study indicates that the language environment is relevant in shaping the neural processing of speech. Furthermore, it provides a potentially useful design for investigating implicit phonological processing in children with anomalous language functions like dyslexia.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2005). Onset entropy matters: Letter-to-phoneme mappings in seven languages. Reading and Writing, 18, 211-229. doi:10.1007/s11145-005-3001-9.
  • Bowerman, M. (1978). Systematizing semantic knowledge: Changes over time in the child's organization of word meaning. Child Development, 49(4), 977-987.

    Abstract

    Selected spontaneous errors of word choice made between the ages of about 2 and 5 by 2 children whose language development has been followed longitudinally were analyzed for clues to semantic development. The errors involved the children's occasional replacement of a contextually required word by a semantically similar word after weeks or months of using both words appropriately. Because the errors were not present from the beginning and because correct usage prevailed most of the time, the errors cannot be explained by existing accounts of semantic development, which ascribe children's word-choice errors to initial linguistic immaturity. A plausible alternative account likens the errors to adult "slips of the tongue" in which the speaker, in the process of constructing a sentence to express a given meaning, chooses incorrectly among competing semantically related words. Interpreted in this way, the errors indicate that the process of drawing words into structured semantic systems based on shared meaning components begins much earlier than experimental studies have suggested. They also provide evidence for certain differences between children and adults in the planning and monitoring of speech.
  • Bramão, I., Mendonça, A., Faísca, L., Ingvar, M., Petersson, K. M., & Reis, A. (2007). The impact of reading and writing skills on a visuo-motor integration task: A comparison between illiterate and literate subjects. Journal of the International Neuropsychological Society, 13(2), 359-364. doi:10.1017/S1355617707070440.

    Abstract

    Previous studies have shown a significant association between reading skills and the performance on visuo-motor tasks. In order to clarify whether reading and writing skills modulate non-linguistic domains, we investigated the performance of two literacy groups on a visuo-motor integration task with non-linguistic stimuli. Twenty-one illiterate participants and twenty matched literate controls were included in the experiment. Subjects were instructed to use the right or the left index finger to point to and touch a randomly presented target on the right or left side of a touch screen. The results showed that the literate subjects were significantly faster in detecting and touching targets on the left compared to the right side of the screen. In contrast, the presentation side did not affect the performance of the illiterate group. These results lend support to the idea that having acquired reading and writing skills, and thus a preferred left-to-right reading direction, influences visual scanning.
  • Braun, B. (2005). Production and perception of thematic contrast in German. Oxford: Lang.
  • De Bree, E., Janse, E., & Van de Zande, A. M. (2007). Stress assignment in aphasia: Word and non-word reading and non-word repetition. Brain and Language, 103, 264-275. doi:10.1016/j.bandl.2007.07.003.

    Abstract

    This paper investigates stress assignment in Dutch aphasic patients in non-word repetition, as well as in real-word and non-word reading. Performance on the non-word reading task was similar for the aphasic patients and the control group, as mainly regular stress was assigned to the targets. However, there were group differences on the real-word reading and non-word repetition tasks. Unlike the non-brain-damaged group, the patients showed a strong regularization tendency in their repetition of irregular patterns. The patients’ stress error patterns suggest an impairment in retention or retrieval of targets with irregular stress patterns. Limited verbal short-term memory is proposed as a possible underlying cause for the stress difficulties.
  • Broeder, D., Brugman, H., & Senft, G. (2005). Documentation of languages and archiving of language data at the Max Planck Institute for Psycholinguistics in Nijmegen. Linguistische Berichte, no. 201, 89-103.
  • Broersma, M. (2005). Perception of familiar contrasts in unfamiliar positions. Journal of the Acoustical Society of America, 117(6), 3890-3901. doi:10.1121/1.1906060.
  • Brown, P. (2005). What does it mean to learn the meaning of words? [Review of the book How children learn the meanings of words by Paul Bloom]. Journal of the Learning Sciences, 14(2), 293-300. doi:10.1207/s15327809jls1402_6.
  • Brown, P., & Levinson, S. C. (1993). 'Uphill' and 'downhill' in Tzeltal. Journal of Linguistic Anthropology, 3(1), 46-74. doi:10.1525/jlin.1993.3.1.46.

    Abstract

    In the face of the prevailing assumption among cognitive scientists that human spatial cognition is essentially egocentric, with objects located in reference to the orientation of ego's own body (hence left/right, up/down, and front/back oppositions), the Mayan language Tzeltal provides a telling counterexample. This article examines a set of conceptual oppositions in Tzeltal, uphill/downhill/across, that provides an absolute system of coordinates with respect to which the location of objects and their trajectories on both micro and macro scales are routinely described.
  • Brown, P. (2007). 'She had just cut/broken off her head': Cutting and breaking verbs in Tzeltal. Cognitive Linguistics, 18(2), 319-330. doi:10.1515/COG.2007.019.

    Abstract

    This paper describes the lexical resources for expressing events of cutting and breaking (C&B hereafter) in the Mayan language Tzeltal. This notional set of verbs is not a class in any grammatical sense; C&B verbs are formally indistinguishable from many other transitive state-change verbs. But they nicely reveal the characteristic specificity of Tzeltal verb semantics: C&B actions are finely differentiated according to the spatial and textural properties of the theme object, with no superordinate term meaning either 'cut in general' or 'break in general'. The paper characterizes the semantics of these verbs and shows that in the great majority of cases it does not predict their argument structure.
  • Brown, A. (2005). [Review of the book The resilience of language: What gesture creation in deaf children can tell us about how all children learn language by Susan Goldin-Meadow]. Linguistics, 43(3), 662-666.
  • Brown, P., & Levinson, S. C. (1993). Linguistic and nonlinguistic coding of spatial arrays: Explorations in Mayan cognition. Working Paper 24. Nijmegen, Netherlands: Cognitive Anthropology Research Group, Max Planck Institute for Psycholinguistics.
  • Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge University Press.

    Abstract

    This study is about the principles for constructing polite speech. The core of it was published as Brown and Levinson (1978); here it is reissued with a new introduction which surveys the now considerable literature in linguistics, psychology and the social sciences that the original extended essay stimulated, and suggests new directions for research. We describe and account for some remarkable parallelisms in the linguistic construction of utterances with which people express themselves in different languages and cultures. A motive for these parallels is isolated - politeness, broadly defined to include both polite friendliness and polite formality - and a universal model is constructed outlining the abstract principles underlying polite usages. This is based on the detailed study of three unrelated languages and cultures: the Tamil of south India, the Tzeltal spoken by Mayan Indians in Chiapas, Mexico, and the English of the USA and England, supplemented by examples from other cultures. Of general interest is the point that underneath the apparent diversity of polite behaviour in different societies lie some general pan-human principles of social interaction, and the model of politeness provides a tool for analysing the quality of social relations in any society.
  • Brown, C. M., & Hagoort, P. (1993). The processing nature of the N400: Evidence from masked priming. Journal of Cognitive Neuroscience, 5, 34-44. doi:10.1162/jocn.1993.5.1.34.

    Abstract

    The N400 is an endogenous event-related brain potential (ERP) that is sensitive to semantic processes during language comprehension. The general question we address in this paper is which aspects of the comprehension process are manifest in the N400. The focus is on the sensitivity of the N400 to the automatic process of lexical access, or to the controlled process of lexical integration. The former process is the reflex-like and effortless behavior of computing a form representation of the linguistic signal, and of mapping this representation onto corresponding entries in the mental lexicon. The latter process concerns the integration of a spoken or written word into a higher-order meaning representation of the context within which it occurs. ERPs and reaction times (RTs) were acquired to target words preceded by semantically related and unrelated prime words. The semantic relationship between a prime and its target has been shown to modulate the amplitude of the N400 to the target. This modulation can arise from lexical access processes, reflecting the automatic spread of activation between words related in meaning in the mental lexicon. Alternatively, the N400 effect can arise from lexical integration processes, reflecting the relative ease of meaning integration between the prime and the target. To assess the impact of automatic lexical access processes on the N400, we compared the effect of masked and unmasked presentations of a prime on the N400 to a following target. Masking prevents perceptual identification, and as such it is claimed to rule out effects from controlled processes. It therefore enables a stringent test of the possible impact of automatic lexical access processes on the N400. The RT study showed a significant semantic priming effect under both unmasked and masked presentations of the prime. The result for masked priming reflects the effect of automatic spreading of activation during the lexical access process. The ERP study showed a significant N400 effect for the unmasked presentation condition, but no such effect for the masked presentation condition. This indicates that the N400 is not a manifestation of lexical access processes, but reflects aspects of semantic integration processes.
  • Burenhult, N. (2005). A grammar of Jahai. Canberra: Pacific Linguistics.
  • Byun, K.-S. (2007). Becoming friends with Korean Sign Language. Cheonan: Chungnam Association of the Deaf.
  • Cameron-Faulkner, T., & Kidd, E. (2007). I'm are what I'm are: The acquisition of first-person singular present BE. Cognitive Linguistics, 18(1), 1-22. doi:10.1515/COG.2007.001.

    Abstract

    The present study investigates the development of am in the speech of one English-speaking child, Scarlett (aged 4;6–5;6). We show that am is infrequent in the speech addressed to children; the acquisition of this form of BE presents a unique insight into the processes underlying language development because children have little evidence regarding its correct use. Scarlett produced a pervasive error where she overextended are to first-person singular contexts where am was required (e.g., I'm are trying, When are I'm finished?). Am gradually emerged in her speech on what appears to be a construction-specific basis. The findings of the study are used in support of a usage-based, constructivist approach to language development.
  • Chen, A., Den Os, E., & De Ruiter, J. P. (2007). Pitch accent type matters for online processing of information status: Evidence from natural and synthetic speech. The Linguistic Review, 24(2), 317-344. doi:10.1515/TLR.2007.012.

    Abstract

    Adopting an eyetracking paradigm, we investigated the role of H*L, L*HL, L*H, H*LH, and deaccentuation at the intonational phrase-final position in online processing of information status in British English in natural speech. The role of H*L, L*H and deaccentuation was also examined in diphone-synthetic speech. It was found that H*L and L*HL create a strong bias towards newness, whereas L*H, like deaccentuation, creates a strong bias towards givenness. In synthetic speech, the same effect was found for H*L, L*H and deaccentuation, but it was delayed. The delay may not be caused entirely by the difference in the segmental quality between synthetic and natural speech. The pitch accent H*LH, however, appears to bias participants' interpretation to the target word, independent of its information status. This finding was explained in the light of the effect of durational information at the segmental level on word recognition.
  • Chen, A. (2005). Universal and language-specific perception of paralinguistic intonational meaning. Utrecht: LOT.
  • Chen, X. S., Rozhdestvensky, T. S., Collins, L. J., Schmitz, J., & Penny, D. (2007). Combined experimental and computational approach to identify non-protein-coding RNAs in the deep-branching eukaryote Giardia intestinalis. Nucleic Acids Research, 35, 4619-4628. doi:10.1093/nar/gkm474.

    Abstract

    Non-protein-coding RNAs represent a large proportion of transcribed sequences in eukaryotes. These RNAs often function in large RNA–protein complexes, which are catalysts in various RNA-processing pathways. As RNA processing has become an increasingly important area of research, numerous non-messenger RNAs have been uncovered in all the model eukaryotic organisms. However, knowledge on RNA processing in deep-branching eukaryotes is still limited. This study focuses on the identification of non-protein-coding RNAs from the diplomonad parasite Giardia intestinalis, showing that a combined experimental and computational search strategy is a fast method of screening reduced or compact genomes. The analysis of our Giardia cDNA library has uncovered 31 novel candidates, including C/D-box and H/ACA box snoRNAs, as well as an unusual transcript of RNase P, and double-stranded RNAs. Subsequent computational analysis has revealed additional putative C/D-box snoRNAs. Our results will lead towards a future understanding of RNA metabolism in the deep-branching eukaryote Giardia, as more ncRNAs are characterized.
  • Chen, J. (2007). 'He cut-break the rope': Encoding and categorizing cutting and breaking events in Mandarin. Cognitive Linguistics, 18(2), 273-285. doi:10.1515/COG.2007.015.

    Abstract

    Mandarin categorizes cutting and breaking events on the basis of fine semantic distinctions in the causal action and the caused result. I demonstrate the semantics of Mandarin C&B verbs from the perspective of event encoding and categorization as well as argument structure alternations. Three semantically different types of predicates can be identified: verbs denoting the C&B action subevent, verbs encoding the C&B result subevent, and resultative verb compounds (RVC) that encode both the action and the result subevents. The first verb of an RVC is basically dyadic, whereas the second is monadic. RVCs as a whole are also basically dyadic, and do not undergo detransitivization.
  • Cho, T., & McQueen, J. M. (2005). Prosodic influences on consonant production in Dutch: Effects of prosodic boundaries, phrasal accent and lexical stress. Journal of Phonetics, 33(2), 121-157. doi:10.1016/j.wocn.2005.01.001.

    Abstract

    Prosodic influences on phonetic realizations of four Dutch consonants (/t d s z/) were examined. Sentences were constructed containing these consonants in word-initial position; the factors lexical stress, phrasal accent and prosodic boundary were manipulated between sentences. Eleven Dutch speakers read these sentences aloud. The patterns found in acoustic measurements of these utterances (e.g., voice onset time (VOT), consonant duration, voicing during closure, spectral center of gravity, burst energy) indicate that the low-level phonetic implementation of all four consonants is modulated by prosodic structure. Boundary effects on domain-initial segments were observed in stressed and unstressed syllables, extending previous findings, which were based on stressed syllables alone. Three aspects of the data are highlighted. First, shorter VOTs were found for /t/ in prosodically stronger locations (stressed, accented and domain-initial), as opposed to longer VOTs in these positions in English. This suggests that prosodically driven phonetic realization is bounded by language-specific constraints on how phonetic features are specified with phonetic content: Shortened VOT in Dutch reflects enhancement of the phonetic feature {−spread glottis}, while lengthened VOT in English reflects enhancement of {+spread glottis}. Prosodic strengthening therefore appears to operate primarily at the phonetic level, such that prosodically driven enhancement of phonological contrast is determined by phonetic implementation of these (language-specific) phonetic features. Second, an accent effect was observed in stressed and unstressed syllables, and was independent of prosodic boundary size. The domain of accentuation in Dutch is thus larger than the foot. Third, within a prosodic category consisting of those utterances with a boundary tone but no pause, tokens with syntactically defined Phonological Phrase boundaries could be differentiated from the other tokens. This syntactic influence on prosodic phrasing implies the existence of an intermediate-level phrase in the prosodic hierarchy of Dutch.
  • Cho, T. (2005). Prosodic strengthening and featural enhancement: Evidence from acoustic and articulatory realizations of /a,i/ in English. Journal of the Acoustical Society of America, 117(6), 3867-3878. doi:10.1121/1.1861893.
  • Cho, T., McQueen, J. M., & Cox, E. A. (2007). Prosodically driven phonetic detail in speech processing: The case of domain-initial strengthening in English. Journal of Phonetics, 35(2), 210-243. doi:10.1016/j.wocn.2006.03.003.

    Abstract

    We explore the role of the acoustic consequences of domain-initial strengthening in spoken-word recognition. In two cross-modal identity-priming experiments, listeners heard sentences and made lexical decisions to visual targets, presented at the onset of the second word in two-word sequences containing lexical ambiguities (e.g., bus tickets, with the competitor bust). These sequences contained Intonational Phrase (IP) or Prosodic Word (Wd) boundaries, and the second word's initial Consonant and Vowel (CV, e.g., [tI]) was spliced from another token of the sequence in IP- or Wd-initial position. Acoustic analyses showed that IP-initial consonants were articulated more strongly than Wd-initial consonants. In Experiment 1, related targets were post-boundary words (e.g., tickets). No strengthening effect was observed (i.e., identity priming effects did not vary across splicing conditions). In Experiment 2, related targets were pre-boundary words (e.g., bus). There was a strengthening effect (stronger priming when the post-boundary CVs were spliced from IP-initial than from Wd-initial position), but only in Wd-boundary contexts. These were the conditions where phonetic detail associated with domain-initial strengthening could assist listeners most in lexical disambiguation. We discuss how speakers may strengthen domain-initial segments during production and how listeners may use the resulting acoustic correlates of prosodic strengthening during word recognition.
  • Christoffels, I. K., Formisano, E., & Schiller, N. O. (2007). The neural correlates of verbal feedback processing: An fMRI study employing overt speech. Human Brain Mapping, 28(9), 868-879. doi:10.1002/hbm.20315.

    Abstract

    Speakers use external auditory feedback to monitor their own speech. Feedback distortion has been found to increase activity in the superior temporal areas. Using fMRI, the present study investigates the neural correlates of processing verbal feedback without distortion. In a blocked design, the following conditions were presented: (1) overt picture-naming, (2) overt picture-naming while pink noise was presented to mask external feedback, (3) covert picture-naming, (4) listening to the picture names (previously recorded from participants' own voices), and (5) listening to pink noise. The results show that auditory feedback processing involves a network of different areas related to general performance monitoring and speech-motor control. These include the cingulate cortex and the bilateral insula, supplementary motor area, bilateral motor areas, cerebellum, thalamus and basal ganglia. Our findings suggest that the anterior cingulate cortex, which is often implicated in error-processing and conflict-monitoring, is also engaged in ongoing speech monitoring. Furthermore, in the superior temporal gyrus, we found a reduced response to speaking under normal feedback conditions. This finding is interpreted in the framework of a forward model according to which, during speech production, the sensory consequence of the speech-motor act is predicted to attenuate the sensitivity of the auditory cortex.
  • Christoffels, I. K., Firk, C., & Schiller, N. O. (2007). Bilingual language control: An event-related brain potential study. Brain Research, 1147, 192-208. doi:10.1016/j.brainres.2007.01.137.

    Abstract

    This study addressed how bilingual speakers switch between their first and second language when speaking. Event-related brain potentials (ERPs) and naming latencies were measured while unbalanced German (L1)-Dutch (L2) speakers performed a picture-naming task. Participants named pictures either in their L1 or in their L2 (blocked language conditions), or participants switched between their first and second language unpredictably (mixed language condition). Furthermore, form similarity between translation equivalents (cognate status) was manipulated. A cognate facilitation effect was found for L1 and L2 indicating phonological activation of the non-response language in blocked and mixed language conditions. The ERP data also revealed small but reliable effects of cognate status. Language switching resulted in equal switching costs for both languages and was associated with a modulation in the ERP waveforms (time windows 275-375 ms and 375-475 ms). Mixed language context affected especially the L1, both in ERPs and in latencies, which became slower in L1 than L2. It is suggested that sustained and transient components of language control should be distinguished. Results are discussed in relation to current theories of bilingual language processing.
  • Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
  • Coombs, P. J., Graham, S. A., Drickamer, K., & Taylor, M. E. (2005). Selective binding of the scavenger receptor C-type lectin to Lewis(x) trisaccharide and related glycan ligands. The Journal of Biological Chemistry, 280, 22993-22999. doi:10.1074/jbc.M504197200.

    Abstract

    The scavenger receptor C-type lectin (SRCL) is an endothelial receptor that is similar in organization to type A scavenger receptors for modified low density lipoproteins but contains a C-type carbohydrate-recognition domain (CRD). Fragments of the receptor consisting of the entire extracellular domain and the CRD have been expressed and characterized. The extracellular domain is a trimer held together by collagen-like and coiled-coil domains adjacent to the CRD. The amino acid sequence of the CRD is very similar to the CRD of the asialoglycoprotein receptor and other galactose-specific receptors, but SRCL binds selectively to asialo-orosomucoid rather than generally to asialoglycoproteins. Screening of a glycan array and further quantitative binding studies indicate that this selectivity results from high affinity binding to glycans bearing the Lewis(x) trisaccharide. Thus, SRCL shares with the dendritic cell receptor DC-SIGN the ability to bind the Lewis(x) epitope. However, it does so in a fundamentally different way, making a primary binding interaction with the galactose moiety of the glycan rather than the fucose residue. SRCL shares with the asialoglycoprotein receptor the ability to mediate endocytosis and degradation of glycoprotein ligands. These studies suggest that SRCL might be involved in selective clearance of specific desialylated glycoproteins from circulation and/or interaction of cells bearing Lewis(x)-type structures with the vascular endothelium.
  • Cronin, K. A., Kurian, A. V., & Snowdon, C. T. (2005). Cooperative problem solving in a cooperatively breeding primate. Animal Behaviour, 69, 133-142. doi:10.1016/j.anbehav.2004.02.024.

    Abstract

    We investigated cooperative problem solving in unrelated pairs of the cooperatively breeding cottontop tamarin, Saguinus oedipus, to assess the cognitive basis of cooperative behaviour in this species and to compare abilities with apes and other monkeys. A transparent apparatus was used that required extension of two handles at opposite ends of the apparatus for access to rewards. Resistance was applied to both handles so that two tamarins had to act simultaneously in order to receive rewards. In contrast to several previous studies of cooperation, both tamarins received rewards as a result of simultaneous pulling. The results from two experiments indicated that the cottontop tamarins (1) had a much higher success rate and efficiency of pulling than many of the other species previously studied, (2) adjusted pulling behaviour to the presence or absence of a partner, and (3) spontaneously developed sustained pulling techniques to solve the task. These findings suggest that cottontop tamarins understand the role of the partner in this cooperative task, a cognitive ability widely ascribed only to great apes. The cooperative social system of tamarins, the intuitive design of the apparatus, and the provision of rewards to both participants may explain the performance of the tamarins.
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Mahwah, NJ: Erlbaum.
  • Cutler, A., Smits, R., & Cooper, N. (2005). Vowel perception: Effects of non-native language vs. non-native dialect. Speech Communication, 47(1-2), 32-42. doi:10.1016/j.specom.2005.02.001.

    Abstract

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). The identification performance of native listeners was significantly better than that of listeners with another language but did not significantly differ from the performance of listeners with another dialect. Dialect differences did however affect the type of perceptual confusions which listeners made; in particular, the Australian listeners’ judgements of vowel tenseness were more variable than the American listeners’ judgements, which may be ascribed to cross-dialectal differences in this vocalic feature. Although listening difficulty can result when speech input mismatches the native dialect in terms of the precise cues for and boundaries of phonetic categories, the difficulty is very much less than that which arises when speech input mismatches the native language in terms of the repertoire of phonemic categories available.
  • Cutler, A. (2005). Why is it so hard to understand a second language in noise? Newsletter, American Association of Teachers of Slavic and East European Languages, 48, 16-16.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A. (1985). Cross-language psycholinguistics. Linguistics, 23, 659-667.
  • Cutler, A., & Fay, D. A. (Eds.). (1978). [Annotated re-issue of R. Meringer and C. Mayer: Versprechen und Verlesen, 1895]. Amsterdam: John Benjamins.
  • Cutler, A. (1980). La leçon des lapsus. La Recherche, 11(112), 686-692.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A., & Cooper, W. E. (1978). Phoneme-monitoring in the context of different phonetic sequences. Journal of Phonetics, 6, 221-225.

    Abstract

    The order of some conjoined words is rigidly fixed (e.g. dribs and drabs/*drabs and dribs). Both phonetic and semantic factors can play a role in determining the fixed order. An experiment was conducted to test whether listeners' reaction times for monitoring a predetermined phoneme are influenced by phonetic constraints on ordering. Two such constraints were investigated: monosyllable-bisyllable and high-low vowel sequences. In English, conjoined words occur in such sequences with much greater frequency than their converses, other factors being equal. Reaction times were significantly shorter for phoneme monitoring in monosyllable-bisyllable sequences than in bisyllable-monosyllable sequences. However, reaction times were not significantly different for high-low vs. low-high vowel sequences.
  • Cutler, A. (1993). Phonological cues to open- and closed-class words in the processing of spoken sentences. Journal of Psycholinguistic Research, 22, 109-131.

    Abstract

    Evidence is presented that (a) the open and the closed word classes in English have different phonological characteristics, (b) the phonological dimension on which they differ is one to which listeners are highly sensitive, and (c) spoken open- and closed-class words produce different patterns of results in some auditory recognition tasks. What implications might link these findings? Two recent lines of evidence from disparate paradigms—the learning of an artificial language, and natural and experimentally induced misperception of juncture—are summarized, both of which suggest that listeners are sensitive to the phonological reflections of open- vs. closed-class word status. Although these correlates cannot be strictly necessary for efficient processing, if they are present listeners exploit them in making word class assignments. That such a use of phonological information is of value to listeners could be indirect evidence that open- vs. closed-class words undergo different processing operations.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. R. (1993). Problems with click detection: Insights from cross-linguistic comparisons. Speech Communication, 13, 401-410. doi:10.1016/0167-6393(93)90038-M.

    Abstract

    Cross-linguistic comparisons may shed light on the levels of processing involved in the performance of psycholinguistic tasks. For instance, if the same pattern of results appears whether or not subjects understand the experimental materials, it may be concluded that the results do not reflect higher-level linguistic processing. In the present study, English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in the listeners' responses. It is concluded that click detection tasks are primarily sensitive to low-level (e.g. acoustic) effects, and hence are not well suited to the investigation of linguistic processing.
  • Cutler, A. (1993). Segmentation problems, rhythmic solutions. Lingua, 92, 81-104. doi:10.1016/0024-3841(94)90338-7.

    Abstract

    The lexicon contains discrete entries, which must be located in speech input in order for speech to be understood; but the continuity of speech signals means that lexical access from spoken input involves a segmentation problem for listeners. The speech environment of prelinguistic infants may not provide special information to assist the infant listeners in solving this problem. Mature language users in possession of a lexicon might be thought to be able to avoid explicit segmentation of speech by relying on information from successful lexical access; however, evidence from adult perceptual studies indicates that listeners do use explicit segmentation procedures. These procedures differ across languages and seem to exploit language-specific rhythmic structure. Efficient as these procedures are, they may not have been developed in response to statistical properties of the input, because bilinguals, equally competent in two languages, apparently only possess one rhythmic segmentation procedure. The origin of rhythmic segmentation may therefore lie in the infant's exploitation of rhythm to solve the segmentation problem and gain a first toehold on lexical acquisition. Recent evidence from speech production and perception studies with prelinguistic infants supports the claim that infants are sensitive to rhythmic structure and its relationship to lexical segmentation.
  • Cutler, A. (1993). Segmenting speech in different languages. The Psychologist, 6(10), 453-455.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Mehler, J. (1993). The periodicity bias. Journal of Phonetics, 21, 101-108.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190,000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
  • Cutler, A., Hawkins, J. A., & Gilligan, G. (1985). The suffixing preference: A processing explanation. Linguistics, 23, 723-758.
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Hillsdale, NJ: Erlbaum.
  • Dahan, D., & Gaskell, M. G. (2007). The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition. Journal of Memory and Language, 57(4), 483-501. doi:10.1016/j.jml.2007.01.001.

    Abstract

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants’ responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech-contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
  • Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453-459.

    Abstract

    Participants' eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent's concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent's name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.
  • Davidson, D. J., & Indefrey, P. (2007). An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Research, 1158, 81-92. doi:10.1016/j.brainres.2007.04.082.

    Abstract

    The relationship between semantic and grammatical processing in sentence comprehension was investigated by examining event-related potential (ERP) and event-related power changes in response to semantic and grammatical violations. Sentences with semantic, phrase structure, or number violations and matched controls were presented serially (1.25 words/s) to 20 participants while EEG was recorded. Semantic violations were associated with an N400 effect and a theta band increase in power, while grammatical violations were associated with a P600 effect and an alpha/beta band decrease in power. A quartile analysis showed that for both types of violations, larger average violation effects were associated with lower relative amplitudes of oscillatory activity, implying an inverse relation between ERP amplitude and event-related power magnitude change in sentence processing.
  • Davis, M. H., Johnsrude, I. S., Hervais-Adelman, A., Taylor, K., & McGettigan, C. (2005). Lexical information drives perceptual learning of distorted speech: Evidence from the comprehension of noise-vocoded sentences. Journal of Experimental Psychology-General, 134(2), 222-241. doi:10.1037/0096-3445.134.2.222.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. Proceedings of the National Academy of Sciences of the USA, 104, 10944-10949. doi:10.1073/pnas.0610848104.

    Abstract

    The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.

  • Deutsch, W., & Frauenfelder, U. (1985). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.6 1985. Nijmegen: MPI for Psycholinguistics.
  • Diesveld, P., & Kempen, G. (1993). Zinnen als bouwwerken: Computerprogramma's voor grammatica-oefeningen. MOER, Tijdschrift voor onderwijs in het Nederlands, 1993(4), 130-138.
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dijkstra, T., & Kempen, G. (Eds.). (1993). Einführung in die Psycholinguistik. München: Hans Huber.
  • Dijkstra, T., Moscoso del Prado Martín, F., Schulpen, B., Schreuder, R., & Baayen, R. H. (2005). A roommate in cream: Morphological family size effects on interlingual homograph recognition. Language and Cognitive Processes, 20, 7-41. doi:10.1080/01690960444000124.
  • Dijkstra, T. (1993). Taalpsychologie (G. Kempen, Ed.). Groningen: Wolters-Noordhoff.
  • Dimroth, C., & Lindner, K. (2005). Was langsame Lerner uns zeigen können: der Erwerb der Finitheit im Deutschen durch einsprachige Kinder mit spezifischer Sprachentwicklungsstörung und durch Zweitsprachlerner. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 40-61.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen. Tagesspiegel, 19737, B6-B6.

    Abstract

    The younger – the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘The younger – the better’. It is argued that these generalizations do not apply across the board. Age-related differences, such as the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs, affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Drolet, M., & Kempen, G. (1985). IPG: A cognitive approach to sentence generation. CCAI: The Journal for the Integrated Study of Artificial Intelligence, Cognitive Science and Applied Epistemology, 2, 37-61.
  • Duffield, N., Matsuo, A., & Roberts, L. (2007). Acceptable ungrammaticality in sentence matching. Second Language Research, 23(2), 155-177. doi:10.1177/0267658307076544.

    Abstract

    This paper presents results from a new set of experiments using the sentence matching paradigm (Forster, 1979; Freedman & Forster, 1985; see also Bley-Vroman & Masterson, 1989), investigating native speakers’ and L2 learners’ knowledge of constraints on clitic placement in French. Our purpose is three-fold: (i) to shed more light on the contrasts between native speakers and L2 learners observed in previous experiments, especially Duffield & White (1999) and Duffield, White, Bruhn de Garavito, Montrul & Prévost (2002); (ii) to address specific criticisms of the sentence-matching paradigm leveled by Gass (2001); and (iii) to provide a firm empirical basis for follow-up experiments with L2 learners.
  • Dunn, M., Terrill, A., Reesink, G., Foley, R. A., & Levinson, S. C. (2005). Structural phylogenetics and the reconstruction of ancient language history. Science, 309(5743), 2072-2075. doi:10.1126/science.1114615.
  • Dunn, M., Foley, R., Levinson, S. C., Reesink, G., & Terrill, A. (2007). Statistical reasoning in the evaluation of typological diversity in Island Melanesia. Oceanic Linguistics, 46(2), 388-403.

    Abstract

    This paper builds on previous work in which we attempted to retrieve a phylogenetic signal using abstract structural features alone, as opposed to cognate sets, drawn from a sample of Island Melanesian languages, both Oceanic (Austronesian) and (non-Austronesian) Papuan (Science 2005 [309]: 2072-75). Here we clarify a number of misunderstandings of this approach, referring particularly to the critique by Mark Donohue and Simon Musgrave (in this same issue of Oceanic Linguistics), in which they fail to appreciate the statistical principles underlying computational phylogenetic methods. We also present new analyses that provide stronger evidence supporting the hypotheses put forward in our original paper: a reanalysis using Bayesian phylogenetic inference demonstrates the robustness of the data and methods, and provides a substantial improvement over the parsimony method used in our earlier paper. We further demonstrate, using the technique of spatial autocorrelation, that neither proximity nor Oceanic contact can be a major determinant of the pattern of structural variation of the Papuan languages, and thus that the phylogenetic relatedness of the Papuan languages remains a serious hypothesis.
  • Dunn, M., Margetts, A., Meira, S., & Terrill, A. (2007). Four languages from the lower end of the typology of locative predication. Linguistics, 45, 873-892. doi:10.1515/LING.2007.026.

    Abstract

    As proposed by Ameka and Levinson (this issue), locative verb systems can be classified into four types according to the number of verbs distinguished. This article addresses the lower extreme of this typology: languages which offer no choice of verb in the basic locative function (BLF). These languages have either a single locative verb, or do not use verbs at all in the basic locative construction (BLC, the construction used to encode the BLF). A close analysis is presented of the behavior of BLF predicate types in four genetically diverse languages: Chukchi (Chukotko-Kamchatkan, Russian Arctic), and Lavukaleve (Papuan isolate, Solomon Islands), which have BLC with the normal copula/existential verb for the language; Tiriyó (Cariban/Taranoan, Brazil), which has an optional copula in the BLC; and Saliba (Austronesian/Western Oceanic, Papua New Guinea), a language with a verbless clause as the BLC. The status of these languages in the typology of positional verb systems is reviewed, and other relevant typological generalizations are discussed.
  • Dunn, M., & Ross, M. (2007). Is Kazukuru really non-Austronesian? Oceanic Linguistics, 46(1), 210-231. doi:10.1353/ol.2007.0018.

    Abstract

    Kazukuru is an extinct language, originally spoken in the inland of the western part of the island of New Georgia, Solomon Islands, and attested by very limited historical sources. Kazukuru has generally been considered to be a Papuan, that is, non-Austronesian, language, mostly on the basis of its lexicon. Reevaluation of the available data suggests a high likelihood that Kazukuru was in fact an Oceanic Austronesian language. Pronominal paradigms are clearly of Austronesian origin, and many other aspects of language structure retrievable from the limited data are also congruent with regional Oceanic Austronesian typology. The extent and possible causes of Kazukuru lexical deviations from the Austronesian norm are evaluated and discussed.
  • Eibl-Eibesfeldt, I., & Senft, G. (1987). Studienbrief Rituelle Kommunikation. Hagen: FernUniversität Gesamthochschule Hagen, Fachbereich Erziehungs- und Sozialwissenschaften, Soziologie, Kommunikation - Wissen - Kultur.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1987). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. Publikation zu Wissenschaftlichen Filmen, Sektion Ethnologie, 25, 1-15.
  • Eisner, F., & McQueen, J. M. (2005). The specificity of perceptual learning in speech processing. Perception & Psychophysics, 67(2), 224-238.

    Abstract

    We conducted four experiments to investigate the specificity of perceptual adjustments made to unusual speech sounds. Dutch listeners heard a female talker produce an ambiguous fricative [?] (between [f] and [s]) in [f]- or [s]-biased lexical contexts. Listeners with [f]-biased exposure (e.g., [witlo?]; from witlof, “chicory”; witlos is meaningless) subsequently categorized more sounds on an [εf]–[εs] continuum as [f] than did listeners with [s]-biased exposure. This occurred when the continuum was based on the exposure talker's speech (Experiment 1), and when the same test fricatives appeared after vowels spoken by novel female and male talkers (Experiments 1 and 2). When the continuum was made entirely from a novel talker's speech, there was no exposure effect (Experiment 3) unless fricatives from that talker had been spliced into the exposure talker's speech during exposure (Experiment 4). We conclude that perceptual learning about idiosyncratic speech is applied at a segmental level and is, under these exposure conditions, talker specific.
  • Enfield, N. J., & Stivers, T. (Eds.). (2007). Person reference in interaction: Linguistic, cultural, and social perspectives. Cambridge: Cambridge University Press.

    Abstract

    How do we refer to people in everyday conversation? No matter the language or culture, we must choose from a range of options: full name ('Robert Smith'), reduced name ('Bob'), description ('tall guy'), kin term ('my son'), etc. Our choices reflect how we know that person in context, and allow us to take a particular perspective on them. This book brings together a team of leading linguists, sociologists and anthropologists to show that there is more to person reference than meets the eye. Drawing on video-recorded, everyday interactions in nine languages, it examines the fascinating ways in which we exploit person reference for social and cultural purposes, and reveals the underlying principles of person reference across cultures from the Americas to Asia to the South Pacific, combining rich ethnographic detail with cross-linguistic generalizations.
  • Enfield, N. J., Kita, S., & De Ruiter, J. P. (2007). Primary and secondary pragmatic functions of pointing gestures. Journal of Pragmatics, 39(10), 1722-1741. doi:10.1016/j.pragma.2007.03.001.

    Abstract

    This article presents a study of a set of pointing gestures produced together with speech in a corpus of video-recorded “locality description” interviews in rural Laos. In a restricted set of the observed gestures (we did not consider gestures with special hand shapes, gestures with arc/tracing motion, or gestures directed at referents within physical reach), two basic formal types of pointing gesture are observed: B-points (large movement, full arm, eye gaze often aligned) and S-points (small movement, hand only, casual articulation). Taking the approach that speech and gesture are structurally integrated in composite utterances, we observe that these types of pointing gesture have distinct pragmatic functions at the utterance level. One type of gesture (usually “big” in form) carries primary, informationally foregrounded information (for saying “where” or “which one”). Infants perform this type of gesture long before they can talk. The second type of gesture (usually “small” in form) carries secondary, informationally backgrounded information which responds to a possible but uncertain lack of referential common ground. We propose that the packaging of the extra locational information into a casual gesture is a way of adding extra information to an utterance without it being on-record that the added information was necessary. This is motivated by the conflict between two general imperatives of communication in social interaction: a social-affiliational imperative not to provide more information than necessary (“Don’t over-tell”), and an informational imperative not to provide less information than necessary (“Don’t under-tell”).
  • Enfield, N. J. (2005). The body as a cognitive artifact in kinship representations: Hand gesture diagrams by speakers of Lao. Current Anthropology, 46(1), 51-81.

    Abstract

    Central to cultural, social, and conceptual life are cognitive artifacts, the perceptible structures which populate our world and mediate our navigation of it, complementing, enhancing, and altering available affordances for the problem-solving challenges of everyday life. Much work in this domain has concentrated on technological artifacts, especially manual tools and devices and the conceptual and communicative tools of literacy and diagrams. Recent research on hand gestures and other bodily movements which occur during speech shows that the human body serves a number of the functions of "cognitive technologies," affording the special cognitive advantages claimed to be associated exclusively with enduring (e.g., printed or drawn) diagrammatic representations. The issue is explored with reference to extensive data from video-recorded interviews with speakers of Lao in Vientiane, Laos, which show integration of verbal descriptions with complex spatial representations akin to diagrams. The study has implications both for research on cognitive artifacts (namely, that the body is a visuospatial representational resource not to be overlooked) and for research on ethnogenealogical knowledge (namely, that hand gestures reveal speakers' conceptualizations of kinship structure which are of a different nature to and not necessarily retrievable from the accompanying linguistic code).
  • Enfield, N. J. (2007). Encoding three-participant events in the Lao clause. Linguistics, 45(3), 509-538. doi:10.1515/LING.2007.016.

    Abstract

    Any language will have a range of predicates that specify three core participants (e.g. 'put', 'show', 'give'), and will conventionally provide a range of constructional types for the expression of these three participants in a structured single-clause or single-sentence event description. This article examines the clausal encoding of three-participant events in Lao, a Tai language of Southeast Asia. There is no possibility in Lao for expression of three full arguments in the core of a single-verb clause (although it is possible to have a third argument in a noncore slot, marked as oblique with a preposition-like element). Available alternatives include extraposing an argument using a topic-comment construction, incorporating an argument into the verb phrase, and ellipsing one or more contextually retrievable arguments. A more common strategy is verb serialization, for example, where a three-place verb (e.g. 'put') is assisted by an additional verb (typically a verb of handling such as 'carry') that provides a slot for the theme argument (e.g. the transferred object in a putting scene). The event construal encoded by this type of structure decomposes the event into a first stage in which the agent comes into control over a theme, and a second in which the agent performs a controlled action (e.g. of transfer) with respect to that theme and a goal (and/or source). The particular set of strategies that Lao offers for encoding three-participant events — notably, topic-comment strategy, ellipsis strategy, serial verb strategy — conforms with (and is presumably motivated by) the general typological profile of the language. The typological features of Lao are typical for the mainland Southeast Asia area (isolating, topic-prominent, verb-serializing, widespread nominal ellipsis).
  • Enfield, N. J. (2007). A grammar of Lao. Berlin: Mouton de Gruyter.

    Abstract

    Lao is the national language of Laos, and is also spoken widely in Thailand and Cambodia. It is a tone language of the Tai-Kadai family (Southwestern Tai branch). Lao is an extreme example of the isolating, analytic language type. This book is the most comprehensive grammatical description of Lao to date. It describes and analyses the important structures of the language, including classifiers, sentence-final particles, and serial verb constructions. Special attention is paid to grammatical topics from a semantic, pragmatic, and typological perspective.
  • Enfield, N. J. (2005). Areal linguistics and mainland Southeast Asia. Annual Review of Anthropology, 34, 181-206. doi:10.1146/annurev.anthro.34.081804.120406.
  • Enfield, N. J. (2007). [Comment on 'Agency' by Paul Kockelman]. Current Anthropology, 48(3), 392-392. doi:10.1086/512998.
  • Enfield, N. J. (2005). [Comment on the book Explorations in the deictic field]. Current Anthropology, 46(2), 212-212.
