Publications

  • Abdel Rahman, R., Sommer, W., & Schweinberger, S. R. (2002). Brain potential evidence for the time course of access to biographical facts and names of familiar persons. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(2), 366-373. doi:10.1037//0278-7393.28.2.366.

    Abstract

    On seeing familiar persons, biographical (semantic) information is typically retrieved faster and more accurately than name information. Serial stage models explain this pattern by suggesting that access to the name follows the retrieval of semantic information. In contrast, interactive activation and competition (IAC) models hold that both processes start together but name retrieval is slower because of structural peculiarities. With a 2-choice go/no-go procedure based on a semantic and a name-related classification, the authors tested differential predictions of the 2 alternative models for reaction times (RTs) and lateralized readiness potentials (LRP). Both LRP (Experiment 1) and RT (Experiment 2) results are in line with IAC models of face identification and naming.
  • Adank, P., Smits, R., & Van Hout, R. (2004). A comparison of vowel normalization procedures for language variation research. Journal of the Acoustical Society of America, 116(5), 3099-3109. doi:10.1121/1.1795335.

    Abstract

    An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
  • Adank, P., Van Hout, R., & Smits, R. (2004). An acoustic description of the vowels of Northern and Southern Standard Dutch. Journal of the Acoustical Society of America, 116(3), 1729-1738. doi:10.1121/1.1779271.
  • Aleman, A., Formisano, E., Koppenhagen, H., Hagoort, P., De Haan, E. H. F., & Kahn, R. S. (2005). The functional neuroanatomy of metrical stress evaluation of perceived and imagined spoken words. Cerebral Cortex, 15(2), 221-228. doi:10.1093/cercor/bhh124.

    Abstract

    We hypothesized that areas in the temporal lobe that have been implicated in the phonological processing of spoken words would also be activated during the generation and phonological processing of imagined speech. We tested this hypothesis using functional magnetic resonance imaging during a behaviorally controlled task of metrical stress evaluation. Subjects were presented with bisyllabic words and had to determine the alternation of strong and weak syllables. Thus, they were required to discriminate between weak-initial words and strong-initial words. In one condition, the stimuli were presented auditorily to the subjects (by headphones). In the other condition the stimuli were presented visually on a screen and subjects were asked to imagine hearing the word. Results showed activation of the supplementary motor area, inferior frontal gyrus (Broca's area) and insula in both conditions. In the superior temporal gyrus (STG) and in the superior temporal sulcus (STS) strong activation was observed during the auditory (perceptual) condition. However, a region located in the posterior part of the STS/STG also responded during the imagery condition. No activation of this same region of the STS was observed during a control condition which also involved processing of visually presented words, but which required a semantic decision from the subject. We suggest that processing of metrical stress, with or without auditory input, relies in part on cortical interface systems located in the posterior part of STS/STG. These results corroborate behavioral evidence regarding phonological loop involvement in auditory–verbal imagery.
  • Alibali, M. W., Flevares, L. M., & Goldin-Meadow, S. (1997). Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology, 89(1), 183-193. doi:10.1037/0022-0663.89.1.183.

    Abstract

    Children's gestures can reveal important information about their problem-solving strategies. This study investigated whether the information children express only in gesture is accessible to adults not trained in gesture coding. Twenty teachers and 20 undergraduates viewed videotaped vignettes of 12 children explaining their solutions to equations. Six children expressed the same strategy in speech and gesture, and 6 expressed different strategies. After each vignette, adults described the child's reasoning. For children who expressed different strategies in speech and gesture, both teachers and undergraduates frequently described strategies that children had not expressed in speech. These additional strategies could often be traced to the children's gestures. Sensitivity to gesture was comparable for teachers and undergraduates. Thus, even without training, adults glean information not only from children's words but also from their hands.
  • Allen, G. L., Kirasic, K. C., Rashotte, M. A., & Haun, D. B. M. (2004). Aging and path integration skill: Kinesthetic and vestibular contributions to wayfinding. Perception & Psychophysics, 66(1), 170-179.

    Abstract

    In a triangle completion task designed to assess path integration skill, younger and older adults performed similarly after being led, while blindfolded, along the route segments on foot, which provided both kinesthetic and vestibular information about the outbound path. In contrast, older adults' performance was impaired, relative to that of younger adults, after they were conveyed, while blindfolded, along the route segments in a wheelchair, which limited them principally to vestibular information. Correlational evidence suggested that cognitive resources were significant factors in accounting for age-related decline in path integration performance.
  • Ameka, F. K. (2002). Cultural scripting of body parts for emotions: On 'jealousy' and related emotions in Ewe. Pragmatics and Cognition, 10(1-2), 27-55. doi:10.1075/pc.10.12.03ame.

    Abstract

    Different languages present a variety of ways of talking about emotional experience. Very commonly, feelings are described through the use of 'body image constructions' in which they are associated with processes in, or states of, specific body parts. The emotions and the body parts that are thought to be their locus and the kind of activity associated with these body parts vary cross-culturally. This study focuses on the meaning of three 'body image constructions' used to describe feelings similar to, but also different from, English 'jealousy', 'envy', and 'covetousness' in the West African language Ewe. It is demonstrated that a 'moving body', a psychologised eye, and red eyes are scripted for these feelings. It is argued that the expressions are not figurative and that their semantics provide good clues to understanding the cultural construction of both emotions and the body.
  • Ameka, F. K., & Breedveld, A. (2004). Areal cultural scripts for social interaction in West African communities. Intercultural Pragmatics, 1(2), 167-187. doi:10.1515/iprg.2004.1.2.167.

    Abstract

    Ways of interacting and not interacting in human societies have social, cognitive and cultural dimensions. These various aspects may be reflected in particular in relation to “taboos”. They reflect the ways of thinking and the values of a society. They are recognized as part of the communicative competence of the speakers and are learned in socialization. Some salient taboos are likely to be named in the language of the relevant society, others may not have a name. Interactional taboos can be specific to a cultural linguistic group or they may be shared across different communities that belong to a ‘speech area’ (Hymes 1972). In this article we describe a number of unnamed norms of communicative conduct which are widespread in West Africa such as the taboos on the use of the left hand in social interaction and on the use of personal names in adult address, and the widespread preference for the use of intermediaries for serious communication. We also examine a named avoidance (yaage) behavior specific to the Fulbe, a nomadic cattle-herding group spread from West Africa across the Sahel as far as Sudan. We show how tacit knowledge about these taboos and other interactive norms can be captured using the cultural scripts methodology.
  • Ameka, F. K. (2004). Grammar and cultural practices: The grammaticalization of triadic communication in West African languages. The Journal of West African Languages, 30(2), 5-28.
  • Andrieu, C., Figuerola, H., Jacquemot, E., Le Guen, O., Roullet, J., & Salès, C. (2005). Parfum de rose, odeur de sainteté: Un sermon Tzeltal sur la première sainte des Amériques. Ateliers du LESC, 29, 11-67. Retrieved from http://ateliers.revues.org/document174.html.
  • Baayen, R. H., & Moscoso del Prado Martín, F. (2005). Semantic density and past-tense formation in three Germanic languages. Language, 81(3), 666-698. doi:10.1353/lan.2005.0112.

    Abstract

    It is widely believed that the difference between regular and irregular verbs is restricted to form. This study questions that belief. We report a series of lexical statistics showing that irregular verbs cluster in denser regions in semantic space. Compared to regular verbs, irregular verbs tend to have more semantic neighbors that in turn have relatively many other semantic neighbors that are morphologically irregular. We show that this greater semantic density for irregulars is reflected in association norms, familiarity ratings, visual lexical-decision latencies, and word-naming latencies. Meta-analyses of the materials of two neuroimaging studies show that in these studies, regularity is confounded with differences in semantic density. Our results challenge the hypothesis of the supposed formal encapsulation of rules of inflection and support lines of research in which sensitivity to probability is recognized as intrinsic to human language.
  • Baayen, H., & Lieber, R. (1991). Productivity and English derivation: A corpus-based study. Linguistics, 29(5), 801-843. doi:10.1515/ling.1991.29.5.801.

    Abstract

    The notion of productivity is one which is central to the study of morphology. It is a notion about which linguists frequently have intuitions. But it is a notion which still remains somewhat problematic in the literature on generative morphology some 15 years after Aronoff raised the issue in his (1976) monograph. In this paper we will review some of the definitions and measures of productivity discussed in the generative and pregenerative literature. We will adopt the definition of productivity suggested by Schultink (1961) and propose a number of statistical measures of productivity whose results, when applied to a fixed corpus, accord nicely with our intuitive estimates of productivity, and which shed light on the quantitative weight of linguistic restrictions on word formation rules. Part of our purpose here is also a very simple one: to make available a substantial set of empirical data concerning the productivity of some of the major derivational affixes of English.

  • Baayen, R. H., Dijkstra, T., & Schreuder, R. (1997). Singulars and plurals in Dutch: Evidence for a parallel dual-route model. Journal of Memory and Language, 37(1), 94-117. doi:10.1006/jmla.1997.2509.

    Abstract

    Are regular morphologically complex words stored in the mental lexicon? Answers to this question have ranged from full listing to parsing for every regular complex word. We investigated the roles of storage and parsing in the visual domain for the productive Dutch plural suffix -en. Two experiments are reported that show that storage occurs for high-frequency noun plurals. A mathematical formalization of a parallel dual-route race model is presented that accounts for the patterns in the observed reaction time data with essentially one free parameter, the speed of the parsing route. Parsing for noun plurals appears to be a time-costly process, which we attribute to the ambiguity of -en, a suffix that is predominantly used as a verbal ending. A third experiment contrasted nouns and verbs. This experiment revealed no effect of surface frequency for verbs, but again a solid effect for nouns. Together, our results suggest that many noun plurals are stored in order to avoid the time-costly resolution of the subcategorization conflict that arises when the -en suffix is attached to nouns.

  • Baayen, R. H. (1997). The pragmatics of the 'tenses' in biblical Hebrew. Studies in Language, 21(2), 245-285. doi:10.1075/sl.21.2.02baa.

    Abstract

    In this paper, I present an analysis of the so-called tense forms of Biblical Hebrew. While there is fairly broad consensus on the interpretation of the yiqtol tense form, the interpretation of the qātal tense form has led to considerable controversy. I will argue that the qātal form has no intrinsic semantic value and that it serves a pragmatic function only, namely, signaling to the hearer that the event or state expressed by the verb cannot be tightly integrated into the discourse representation of the hearer, given the speaker's estimate of their common ground.
  • Baayen, R. H., Lieber, R., & Schreuder, R. (1997). The morphological complexity of simplex nouns. Linguistics, 35, 861-877. doi:10.1515/ling.1997.35.5.861.
  • Baayen, R. H., & Lieber, R. (1997). Word frequency distributions and lexical semantics. Computers and the Humanities, 30, 281-291.

    Abstract

    This paper addresses the relation between meaning, lexical productivity, and frequency of use. Using density estimation as a visualization tool, we show that differences in semantic structure can be reflected in probability density functions estimated for word frequency distributions. We call attention to an example of a bimodal density, and suggest that bimodality arises when distributions of well-entrenched lexical items, which appear to be lognormal, are mixed with distributions of productively created nonce formations.
  • Bastiaansen, M. C. M., Van Berkum, J. J. A., & Hagoort, P. (2002). Syntactic processing modulates the θ rhythm of the human EEG. NeuroImage, 17, 1479-1492. doi:10.1006/nimg.2002.1275.

    Abstract

    Changes in oscillatory brain dynamics can be studied by means of induced band power (IBP) analyses, which quantify event-related changes in amplitude of frequency-specific EEG rhythms. Such analyses capture EEG phenomena that are not part of traditional event-related potential measures. The present study investigated whether IBP changes in the δ, θ, and α frequency ranges are sensitive to syntactic violations in sentences. Subjects read sentences that either were correct or contained a syntactic violation. The violations were either grammatical gender agreement violations, where a prenominal adjective was not appropriately inflected for the head noun's gender, or number agreement violations, in which a plural quantifier was combined with a singular head noun. IBP changes of the concurrently measured EEG were computed in five frequency bands of 2-Hz width, individually adjusted on the basis of subjects' α peak, ranging approximately from 2 to 12 Hz. Words constituting a syntactic violation elicited larger increases in θ power than the same words in a correct sentence context, in an interval of 300–500 ms after word onset. Of all the frequency bands studied, this was true for the θ frequency band only. The scalp topography of this effect was different for different violations: following number violations a left-hemispheric dominance was found, whereas gender violations elicited a right-hemisphere dominance of the θ power increase. Possible interpretations of this effect are considered in closing.
  • Bastiaansen, M. C. M., Van der Linden, M., Ter Keurs, M., Dijkstra, T., & Hagoort, P. (2005). Theta responses are involved in lexico-semantic retrieval during language processing. Journal of Cognitive Neuroscience, 17, 530-541. doi:10.1162/0898929053279469.

    Abstract

    Oscillatory neuronal dynamics, observed in the human electroencephalogram (EEG) during language processing, have been related to the dynamic formation of functionally coherent networks that serve the role of integrating the different sources of information needed for understanding the linguistic input. To further explore the functional role of oscillatory synchrony during language processing, we quantified event-related EEG power changes induced by the presentation of open-class (OC) words and closed-class (CC) words in a wide range of frequencies (from 1 to 30 Hz), while subjects read a short story. Word presentation induced three oscillatory components: a theta power increase (4–7 Hz), an alpha power decrease (10–12 Hz), and a beta power decrease (16–21 Hz). Whereas the alpha and beta responses showed mainly quantitative differences between the two word classes, the theta responses showed qualitative differences between OC words and CC words: A theta power increase was found over left temporal areas for OC words, but not for CC words. The left temporal theta increase may index the activation of a network involved in retrieving the lexical–semantic properties of the OC items.
  • Bastiaansen, M. C. M., Posthuma, D., Groot, P. F. C., & De Geus, E. J. C. (2002). Event-related alpha and theta responses in a visuo-spatial working memory task. Clinical Neurophysiology, 113(12), 1882-1893. doi:10.1016/S1388-2457(02)00303-6.

    Abstract

    Objective: To explore the reactivity of the theta and alpha rhythms during visuo-spatial working memory. Methods: One hundred and seventy-four subjects performed a delayed response task. They had to remember the spatial location of a target stimulus on a computer screen for a 1 or a 4 s retention interval. The target either remained visible throughout the entire interval (sensory trials) or disappeared after 150 ms (memory trials). Changes in induced band power (IBP) in the electroencephalogram (EEG) were analyzed in 4 narrow, individually adjusted frequency bands between 4 and 12 Hz. Results: After presentation of the target stimulus, a phasic power increase was found, irrespective of condition and delay interval, in the lower (roughly, 4–8 Hz) frequency bands, with a posterior maximum. During the retention interval, sustained occipital–parietal alpha power increase and frontal theta power decrease were found. Most importantly, the memory trials showed larger IBP decreases in the theta band over frontal electrodes than the sensory trials. Conclusions: The phasic power increase following target onset is interpreted to reflect encoding of the target location. The sustained theta decrease, which is larger for memory trials, is tentatively interpreted to reflect visuo-spatial working memory processes.
  • Bastiaansen, M. C. M., Van Berkum, J. J. A., & Hagoort, P. (2002). Event-related theta power increases in the human EEG during online sentence processing. Neuroscience Letters, 323(1), 13-16. doi:10.1016/S0304-3940(01)02535-6.

    Abstract

    By analyzing event-related changes in induced band power in narrow frequency bands of the human electroencephalograph, the present paper explores a possible functional role of the alpha and theta rhythms during the processing of words and of sentences. The results show a phasic power increase in the theta frequency range, together with a phasic power decrease in the alpha frequency range, following the presentation of words in a sentence. These effects may be related to word processing, either lexical or in relation to sentence context. Most importantly, there is a slow and highly frequency-specific increase in theta power as a sentence unfolds, possibly related to the formation of an episodic memory trace, or to incremental verbal working memory load.
  • Bastiaansen, M. C. M., Böcker, K. B. E., & Brunia, C. H. M. (2002). ERD as an index of anticipatory attention? Effects of stimulus degradation. Psychophysiology, 39(1), 16-28. doi:10.1111/1469-8986.3910016.

    Abstract

    Previous research has suggested that the stimulus-preceding negativity (SPN) is largely independent of stimulus modality. In contrast, the scalp topography of the event related desynchronization (ERD) related to the anticipation of stimuli providing knowledge of results (KR) is modality dependent. These findings, combined with functional SPN research, lead to the hypothesis that anticipatory ERD reflects anticipatory attention, whereas the SPN mainly depends on the affective-motivational properties of the anticipated stimulus. To further investigate the prestimulus ERD, and compare this measure with the SPN, 12 participants performed a time-estimation task, and were informed about the quality of their time estimation by an auditory or a visual stimulus providing KR. The KR stimuli could be either intact or degraded. Auditory degraded KR stimuli were less effective than other KR stimuli in guiding subsequent behavior, and were preceded by a larger SPN. There were no effects of degradation on the SPN in the visual modality. Preceding auditory KR stimuli no ERD was present, whereas preceding visual stimuli an occipital ERD was found. However, contrary to expectation, the latter was larger preceding intact than preceding degraded stimuli. It is concluded that the data largely agree with an interpretation of the pre-KR SPN as a reflection of the anticipation of the affective-motivational value of KR stimuli, and of the prestimulus ERD as a perceptual anticipatory attention process.
  • Bauer, B. L. M. (2004). Vigesimal numerals in Romance: An Indo-European perspective. General Linguistics, 41, 21-46.
  • Bauer, B. L. M. (2004). [Review of the book Pre-Indo-European by Winfred P. Lehmann]. Journal of Indo-European Studies, 32, 146-155.
  • Bauer, B. L. M. (1997). Response to David Lightfoot’s Review of The Emergence and Development of SVO Patterning in Latin and French: Diachronic and Psycholinguistic Perspectives. Language, 73(2), 352-358.
  • Bauer, B. L. M. (2002). Variability in word order: Adjectives and comparatives in Latin, Romance, and Germanic. Southwest Journal of Linguistics, 20, 19-50.
  • Baumann, H., Dirksmeyer, R., & Wittenburg, P. (2004). Long-term archiving. Language Archive Newsletter, 1(2), 3-3.
  • Beattie, G. W., Cutler, A., & Pearson, M. (1982). Why is Mrs Thatcher interrupted so often? [Letters to Nature]. Nature, 300, 744-747. doi:10.1038/300744a0.

    Abstract

    If a conversation is to proceed smoothly, the participants have to take turns to speak. Studies of conversation have shown that there are signals which speakers give to inform listeners that they are willing to hand over the conversational turn [1-4]. Some of these signals are part of the text (for example, completion of syntactic segments), some are non-verbal (such as completion of a gesture), but most are carried by the pitch, timing and intensity pattern of the speech; for example, both pitch and loudness tend to drop particularly low at the end of a speaker's turn. When one speaker interrupts another, the two can be said to be disputing who has the turn. Interruptions can occur because one participant tries to dominate or disrupt the conversation. But it could also be the case that mistakes occur in the way these subtle turn-yielding signals are transmitted and received. We demonstrate here that many interruptions in an interview with Mrs Margaret Thatcher, the British Prime Minister, occur at points where independent judges agree that her turn appears to have finished. It is suggested that she is unconsciously displaying turn-yielding cues at certain inappropriate points. The turn-yielding cues responsible are identified.
  • Belke, E., Brysbaert, M., Meyer, A. S., & Ghyselinck, M. (2005). Age of acquisition effects in picture naming: Evidence for a lexical-semantic competition hypothesis. Cognition, 96, B45-B54. doi:10.1016/j.cognition.2004.11.006.

    Abstract

    In many tasks the effects of frequency and age of acquisition (AoA) on reaction latencies are similar in size. However, in picture naming the AoA-effect is often significantly larger than expected on the basis of the frequency-effect. Previous explanations of this frequency-independent AoA-effect have attributed it to the organisation of the semantic system or to the way phonological word forms are stored in the mental lexicon. Using a semantic blocking paradigm, we show that semantic context effects on naming latencies are more pronounced for late-acquired than for early-acquired words. This interaction between AoA and naming context is likely to arise during lexical-semantic encoding, which we put forward as the locus for the frequency-independent AoA-effect.
  • Belke, E., Meyer, A. S., & Damian, M. F. (2005). Refractory effects in picture naming as assessed in a semantic blocking paradigm. The Quarterly Journal of Experimental Psychology Section A, 58, 667-692. doi:10.1080/02724980443000142.

    Abstract

    In the cyclic semantic blocking paradigm participants repeatedly name sets of objects with semantically related names (homogeneous sets) or unrelated names (heterogeneous sets). The naming latencies are typically longer in related than in unrelated sets. In a first experiment we replicated this semantic blocking effect and demonstrated that the effect only arose after all objects of a set had been shown and named once. In a second experiment, the objects of a set were presented simultaneously (instead of on successive trials). Evidence for semantic blocking was found in the naming latencies and in the gaze durations for the objects, which were longer in homogeneous than in heterogeneous sets. For the gaze-to-speech lag between the offset of gaze on an object and the onset of the articulation of its name, a repetition priming effect was obtained but no blocking effect. A final experiment showed that the blocking effect for speech onset latencies generalized to new, previously unnamed lexical items. We propose that the blocking effect is due to refractory behaviour in the semantic system.
  • Belke, E., & Meyer, A. S. (2002). Tracking the time course of multidimensional stimulus discrimination: Analyses of viewing patterns and processing times during "same"-"different" decisions. European Journal of Cognitive Psychology, 14(2), 237-266. doi:10.1080/09541440143000050.

    Abstract

    We investigated the time course of conjunctive "same"-"different" judgements for visually presented object pairs by means of combined reaction time and on-line eye movement measurements. The analyses of viewing patterns, viewing times, and reaction times showed that participants engaged in a parallel self-terminating search for differences. In addition, the results obtained for objects differing in only one dimension suggest that processing times may depend on the relative codability of the stimulus dimensions. The results are reviewed in a broader framework in view of higher-order processes. We propose that overspecifications of colour, often found in object descriptions, may have an "early" visual rather than a "late" linguistic origin. In a parallel assessment of the detection materials, participants overspecified the objects' colour substantially more often than their size. We argue that referential overspecifications of colour are largely attributable to mechanisms of visual discrimination.
  • Benazzo, S., Dimroth, C., Perdue, C., & Watorek, M. (2004). Le rôle des particules additives dans la construction de la cohésion discursive en langue maternelle et en langue étrangère. Langages, 155, 76-106.

    Abstract

    We compare the use of additive particles such as aussi ('also'), encore ('again, still'), and their 'translation equivalents', in a narrative task based on a series of pictures performed by groups of children aged 4 years, 7 years and 10 years using their first language (L1 French, German, Polish), and by adult Polish and German learners of French as a second language (L2). From the cross-sectional analysis we propose developmental patterns which show remarkable similarities for all types of learner, but which stem from different determining factors. For the children, the patterns can best be explained by the development of their capacity to use available items in appropriate discourse contexts; for the adults, the limitations of their linguistic repertoire at different levels of achievement determine the possibility of incorporating these items into their utterance structure. Finally, we discuss to what extent these general tendencies are influenced by the specificities of the different languages used.
  • Bercelli, F., Viaro, M., & Rossano, F. (2004). Attività in alcuni generi di psicoterapia. Rivista di psicolinguistica applicata, IV (2/3), 111-127. doi:10.1400/19208.

    Abstract

    The main aim of our paper is to contribute to the outline of a general inventory of activities in psychotherapy, as a step towards a description of overall conversational organizations of different therapeutic approaches. From the perspective of Conversation Analysis, we describe some activities commonly occurring in a corpus of sessions conducted by cognitive and relational-systemic therapists. Two activities appear to be basic: (a) inquiry: therapists elicit information from patients on their problems and circumstances; (b) reworking: therapists say something designed as an elaboration of what patients have previously said, or as something that can be grounded on it; and patients are induced to confirm/disprove and contribute to the elaboration. Furthermore, we describe other activities, which turn out to be auxiliary to the basic ones: storytelling, procedural arrangement, recalling, noticing, teaching. We finally show some ways in which these activities can be integrated through conversational interaction.
  • Bien, H., Levelt, W. J. M., & Baayen, R. H. (2005). Frequency effects in compound production. Proceedings of the National Academy of Sciences of the United States of America, 102(49), 17876-17881.

    Abstract

    Four experiments investigated the role of frequency information in compound production by independently varying the frequencies of the first and second constituent as well as the frequency of the compound itself. Pairs of Dutch noun-noun compounds were selected such that there was a maximal contrast for one frequency while matching the other two frequencies. In a position-response association task, participants first learned to associate a compound with a visually marked position on a computer screen. In the test phase, participants had to produce the associated compound in response to the appearance of the position mark, and we measured speech onset latencies. The compound production latencies varied significantly according to factorial contrasts in the frequencies of both constituting morphemes but not according to a factorial contrast in compound frequency, providing further evidence for decompositional models of speech production. In a stepwise regression analysis of the joint data of Experiments 1-4, however, compound frequency was a significant nonlinear predictor, with facilitation in the low-frequency range and a trend toward inhibition in the high-frequency range. Furthermore, a combination of structural measures of constituent frequencies and entropies explained significantly more variance than a strict decompositional model, including cumulative root frequency as the only measure of constituent frequency, suggesting a role for paradigmatic relations in the mental lexicon.
  • Bierwisch, M. (1997). Universal Grammar and the Basic Variety. Second Language Research, 13(4), 348-366. doi:10.1177/026765839701300403.

    Abstract

    The Basic Variety (BV) as conceived by Klein and Perdue (K&P) is a relatively stable state in the process of spontaneous (adult) second language acquisition, characterized by a small set of phrasal, semantic and pragmatic principles. These principles are derived by inductive generalization from a fairly large body of data. They are considered by K&P as roughly equivalent to those of Universal Grammar (UG) in the sense of Chomsky's Minimalist Program, with the proviso that the BV allows for only weak (or unmarked) formal features. The present article first discusses the viability of the BV principles proposed by K&P, arguing that some of them are in need of clarification with learner varieties, and that they are, in any case, not likely to be part of UG, as they exclude phenomena (e.g., so-called psych verbs) that cannot be ruled out even from the core of natural language. The article also considers the proposal that learner varieties of the BV type are completely unmarked instantiations of UG. Putting aside problems arising from the Minimalist Program, especially the question whether a grammar with only weak features would be a factual possibility and what it would look like, it is argued that the BV as characterized by K&P must be considered as the result of a process that crucially differs from first language acquisition as furnished by UG for a number of reasons, including properties of the BV itself. As a matter of fact, several of the properties claimed for the BV by K&P are more likely the result of general learning strategies than of language-specific principles. If this is correct, the characterization of the BV is a fairly interesting result, albeit of a rather different type than K&P suggest.
  • De Bleser, R., Willmes, K., Graetz, P., & Hagoort, P. (1991). De Akense Afasie Test. Logopedie en Foniatrie, 63, 207-217.
  • Bohnemeyer, J. (2002). [Review of the book Explorations in linguistic relativity ed. by Martin Pütz and Marjolijn H. Verspoor]. Language in Society, 31(3), 452-456. doi:10.1017/S0047404502020316.
  • Bonte, M. L., Mitterer, H., Zellagui, N., Poelmans, H., & Blomert, L. (2005). Auditory cortical tuning to statistical regularities in phonology. Clinical Neurophysiology, 116(12), 2765-2774. doi:10.1016/j.clinph.2005.08.012.

    Abstract

    Objective: Ample behavioral evidence suggests that distributional properties of the language environment influence the processing of speech. Yet, how these characteristics are reflected in neural processes remains largely unknown. The present ERP study investigates neurophysiological correlates of phonotactic probability: the distributional frequency of phoneme combinations. Methods: We employed an ERP measure indicative of experience-dependent auditory memory traces, the mismatch negativity (MMN). We presented pairs of non-words that differed by the degree of phonotactic probability in a codified passive oddball design that minimizes the contribution of acoustic processes. Results: In Experiment 1 the non-word with high phonotactic probability (notsel) elicited a significantly enhanced MMN as compared to the non-word with low phonotactic probability (notkel). In Experiment 2 this finding was replicated with a non-word pair with a smaller acoustic difference (notsel–notfel). An MMN enhancement was not observed in a third acoustic control experiment with stimuli having comparable phonotactic probability (so–fo). Conclusions: Our data suggest that auditory cortical responses to phoneme clusters are modulated by statistical regularities of phoneme combinations. Significance: This study indicates that the language environment is relevant in shaping the neural processing of speech. Furthermore, it provides a potentially useful design for investigating implicit phonological processing in children with anomalous language functions like dyslexia.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2005). Onset entropy matters: Letter-to-phoneme mappings in seven languages. Reading and Writing, 18, 211-229. doi:10.1007/s11145-005-3001-9.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2004). Word-initial entropy in five languages: Letter to sound, and sound to letter. Written Language & Literacy, 7(2), 165-184.

    Abstract

    Alphabetic orthographies show more or less ambiguous relations between spelling and sound patterns. In transparent orthographies, like Italian, the pronunciation can be predicted from the spelling and vice versa. Opaque orthographies, like English, often display unpredictable spelling–sound correspondences. In this paper we present a computational analysis of word-initial bi-directional spelling–sound correspondences for Dutch, English, French, German, and Hungarian, stated in entropy values for various grain sizes. This allows us to position the five languages on the continuum from opaque to transparent orthographies, both in spelling-to-sound and sound-to-spelling directions. The analysis is based on metrics derived from information theory, and therefore independent of any specific theory of visual word recognition as well as of any specific theoretical approach of orthography.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Broeder, D., Brugman, H., & Senft, G. (2005). Documentation of languages and archiving of language data at the Max Planck Institute for Psycholinguistics in Nijmegen. Linguistische Berichte, 201, 89-103.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broersma, M. (2005). Perception of familiar contrasts in unfamiliar positions. Journal of the Acoustical Society of America, 117(6), 3890-3901. doi:10.1121/1.1906060.
  • Brown, P. (2005). What does it mean to learn the meaning of words? [Review of the book How children learn the meanings of words by Paul Bloom]. Journal of the Learning Sciences, 14(2), 293-300. doi:10.1207/s15327809jls1402_6.
  • Brown, A. (2005). [Review of the book The resilience of language: What gesture creation in deaf children can tell us about how all children learn language by Susan Goldin-Meadow]. Linguistics, 43(3), 662-666.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Cho, T., & McQueen, J. M. (2005). Prosodic influences on consonant production in Dutch: Effects of prosodic boundaries, phrasal accent and lexical stress. Journal of Phonetics, 33(2), 121-157. doi:10.1016/j.wocn.2005.01.001.

    Abstract

    Prosodic influences on phonetic realizations of four Dutch consonants (/t d s z/) were examined. Sentences were constructed containing these consonants in word-initial position; the factors lexical stress, phrasal accent and prosodic boundary were manipulated between sentences. Eleven Dutch speakers read these sentences aloud. The patterns found in acoustic measurements of these utterances (e.g., voice onset time (VOT), consonant duration, voicing during closure, spectral center of gravity, burst energy) indicate that the low-level phonetic implementation of all four consonants is modulated by prosodic structure. Boundary effects on domain-initial segments were observed in stressed and unstressed syllables, extending previous findings which have been based on stressed syllables alone. Three aspects of the data are highlighted. First, shorter VOTs were found for /t/ in prosodically stronger locations (stressed, accented and domain-initial), as opposed to longer VOTs in these positions in English. This suggests that prosodically driven phonetic realization is bounded by language-specific constraints on how phonetic features are specified with phonetic content: Shortened VOT in Dutch reflects enhancement of the phonetic feature {−spread glottis}, while lengthened VOT in English reflects enhancement of {+spread glottis}. Prosodic strengthening therefore appears to operate primarily at the phonetic level, such that prosodically driven enhancement of phonological contrast is determined by phonetic implementation of these (language-specific) phonetic features. Second, an accent effect was observed in stressed and unstressed syllables, and was independent of prosodic boundary size. The domain of accentuation in Dutch is thus larger than the foot. Third, within a prosodic category consisting of those utterances with a boundary tone but no pause, tokens with syntactically defined Phonological Phrase boundaries could be differentiated from the other tokens. This syntactic influence on prosodic phrasing implies the existence of an intermediate-level phrase in the prosodic hierarchy of Dutch.
  • Cho, T. (2005). Prosodic strengthening and featural enhancement: Evidence from acoustic and articulatory realizations of /a,i/ in English. Journal of the Acoustical Society of America, 117(6), 3867-3878. doi:10.1121/1.1861893.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., Jun, S.-A., & Ladefoged, P. (2002). Acoustic and aerodynamic correlates of Korean stops and fricatives. Journal of Phonetics, 30(2), 193-228. doi:10.1006/jpho.2001.0153.

    Abstract

    This study examines acoustic and aerodynamic characteristics of consonants in standard Korean and in Cheju, an endangered Korean language. The focus is on the well-known three-way distinction among voiceless stops (i.e., lenis, fortis, aspirated) and the two-way distinction between the voiceless fricatives /s/ and /s*/. While such a typologically unusual contrast among voiceless stops has long drawn the attention of phoneticians and phonologists, there is no single work in the literature that discusses a body of data representing a relatively large number of speakers. This study reports a variety of acoustic and aerodynamic measures obtained from 12 Korean speakers (four speakers of Seoul Korean and eight speakers of Cheju). Results show that, in addition to findings similar to those reported by others, there are three crucial points worth noting. Firstly, lenis, fortis, and aspirated stops are systematically differentiated from each other by the voice quality of the following vowel. Secondly, these stops are also differentiated by aerodynamic mechanisms. The aspirated and fortis stops are similar in supralaryngeal articulation, but employ a different relation between intraoral pressure and flow. Thirdly, our study suggests that the fricative /s/ is better categorized as “lenis” rather than “aspirated”. The paper concludes with a discussion of the implications of Korean data for theories of the voicing contrast and their phonological representations.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Coombs, P. J., Graham, S. A., Drickamer, K., & Taylor, M. E. (2005). Selective binding of the scavenger receptor C-type lectin to Lewisx trisaccharide and related glycan ligands. The Journal of Biological Chemistry, 280, 22993-22999. doi:10.1074/jbc.M504197200.

    Abstract

    The scavenger receptor C-type lectin (SRCL) is an endothelial receptor that is similar in organization to type A scavenger receptors for modified low density lipoproteins but contains a C-type carbohydrate-recognition domain (CRD). Fragments of the receptor consisting of the entire extracellular domain and the CRD have been expressed and characterized. The extracellular domain is a trimer held together by collagen-like and coiled-coil domains adjacent to the CRD. The amino acid sequence of the CRD is very similar to the CRD of the asialoglycoprotein receptor and other galactose-specific receptors, but SRCL binds selectively to asialo-orosomucoid rather than generally to asialoglycoproteins. Screening of a glycan array and further quantitative binding studies indicate that this selectivity results from high affinity binding to glycans bearing the Lewis(x) trisaccharide. Thus, SRCL shares with the dendritic cell receptor DC-SIGN the ability to bind the Lewis(x) epitope. However, it does so in a fundamentally different way, making a primary binding interaction with the galactose moiety of the glycan rather than the fucose residue. SRCL shares with the asialoglycoprotein receptor the ability to mediate endocytosis and degradation of glycoprotein ligands. These studies suggest that SRCL might be involved in selective clearance of specific desialylated glycoproteins from circulation and/or interaction of cells bearing Lewis(x)-type structures with the vascular endothelium.
  • Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45(3), 207-228.

    Abstract

    Four cross-modal priming experiments and two forced-choice identification experiments investigated the use of suprasegmental cues to stress in the recognition of spoken English words, by native (English-speaking) and non-native (Dutch) listeners. Previous results had indicated that suprasegmental information was exploited in lexical access by Dutch but not by English listeners. For both listener groups, recognition of visually presented target words was faster, in comparison to a control condition, after stress-matching spoken primes, either monosyllabic (mus- from MUsic/muSEum) or bisyllabic (admi- from ADmiral/admiRAtion). For native listeners, the effect of stress-mismatching bisyllabic primes was not different from that of control primes, but mismatching monosyllabic primes produced partial facilitation. For non-native listeners, both bisyllabic and monosyllabic stress-mismatching primes produced partial facilitation. Native English listeners thus can exploit suprasegmental information in spoken-word recognition, but information from two syllables is used more effectively than information from one syllable. Dutch listeners are less proficient at using suprasegmental information in English than in their native language, but, as in their native language, use mono- and bisyllabic information to an equal extent. In forced-choice identification, Dutch listeners outperformed native listeners at correctly assigning a monosyllabic fragment (e.g., mus-) to one of two words differing in stress.
  • Cronin, K. A., Kurian, A. V., & Snowdon, C. T. (2005). Cooperative problem solving in a cooperatively breeding primate. Animal Behaviour, 69, 133-142. doi:10.1016/j.anbehav.2004.02.024.

    Abstract

    We investigated cooperative problem solving in unrelated pairs of the cooperatively breeding cottontop tamarin, Saguinus oedipus, to assess the cognitive basis of cooperative behaviour in this species and to compare abilities with other apes and monkeys. A transparent apparatus was used that required extension of two handles at opposite ends of the apparatus for access to rewards. Resistance was applied to both handles so that two tamarins had to act simultaneously in order to receive rewards. In contrast to several previous studies of cooperation, both tamarins received rewards as a result of simultaneous pulling. The results from two experiments indicated that the cottontop tamarins (1) had a much higher success rate and efficiency of pulling than many of the other species previously studied, (2) adjusted pulling behaviour to the presence or absence of a partner, and (3) spontaneously developed sustained pulling techniques to solve the task. These findings suggest that cottontop tamarins understand the role of the partner in this cooperative task, a cognitive ability widely ascribed only to great apes. The cooperative social system of tamarins, the intuitive design of the apparatus, and the provision of rewards to both participants may explain the performance of the tamarins.
  • Cutler, A., & Otake, T. (2002). Rhythmic categories in spoken-word recognition. Journal of Memory and Language, 46(2), 296-322. doi:10.1006/jmla.2001.2814.

    Abstract

    Rhythmic categories such as morae in Japanese or stress units in English play a role in the perception of spoken language. We examined this role in Japanese, since recent evidence suggests that morae may intervene as structural units in word recognition. First, we found that traditional puns more often substituted part of a mora than a whole mora. Second, when listeners reconstructed distorted words, e.g. panorama from panozema, responses were faster and more accurate when only a phoneme was distorted (panozama, panorema) than when a whole CV mora was distorted (panozema). Third, lexical decisions on the same nonwords were better predicted by duration and number of phonemes from nonword uniqueness point to word end than by number of morae. Our results indicate no role for morae in early spoken-word processing; we propose that rhythmic categories constrain not initial lexical activation but subsequent processes of speech segmentation and selection among word candidates.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Demuth, K., & McQueen, J. M. (2002). Universality versus language-specificity in listening to running speech. Psychological Science, 13(3), 258-262. doi:10.1111/1467-9280.00447.

    Abstract

    Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
  • Cutler, A., Smits, R., & Cooper, N. (2005). Vowel perception: Effects of non-native language vs. non-native dialect. Speech Communication, 47(1-2), 32-42. doi:10.1016/j.specom.2005.02.001.

    Abstract

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). The identification performance of native listeners was significantly better than that of listeners with another language but did not significantly differ from the performance of listeners with another dialect. Dialect differences did however affect the type of perceptual confusions which listeners made; in particular, the Australian listeners’ judgements of vowel tenseness were more variable than the American listeners’ judgements, which may be ascribed to cross-dialectal differences in this vocalic feature. Although listening difficulty can result when speech input mismatches the native dialect in terms of the precise cues for and boundaries of phonetic categories, the difficulty is very much less than that which arises when speech input mismatches the native language in terms of the repertoire of phonemic categories available.
  • Cutler, A. (2005). Why is it so hard to understand a second language in noise? Newsletter, American Association of Teachers of Slavic and East European Languages, 48, 16-16.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of Phonetic Society of Japan, 1, 4-13.
  • Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.

    Abstract

    Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability).
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A. (1997). The syllable’s role in the segmentation of stress languages. Language and Cognitive Processes, 12, 839-845. doi:10.1080/016909697386718.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47(2), 292-314. doi:10.1016/S0749-596X(02)00001-3.

    Abstract

    The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle ) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse. In Experiment 1, the first utterance instructed participants to move one object above or below a shape (e.g., “Put the candle/candy below the triangle”) and the second utterance contained an accented or deaccented definite noun phrase which referred to the same object or introduced a new entity (e.g., “Now put the CANDLE above the square” vs. “Now put the candle ABOVE THE SQUARE”). Fixations to the competitor (e.g., candy ) demonstrated a bias to interpret deaccented nouns as anaphoric and accented nouns as nonanaphoric. Experiment 2 used only accented nouns in the second instruction, varying whether the referent of this second instruction was the Theme of the first instruction (e.g., “Put the candle below the triangle”) or the Goal of the first instruction (e.g., “Put the necklace below the candle”). Participants preferred to interpret accented noun phrases as referring to a previously mentioned nonfocused entity (the Goal) rather than as introducing a new unmentioned entity.
  • Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453-459.

    Abstract

    Participants' eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent's concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent's name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.
  • Davis, M. H., Johnsrude, I. S., Hervais-Adelman, A., Taylor, K., & McGettigan, C. (2005). Lexical information drives perceptual learning of distorted speech: Evidence from the comprehension of noise-vocoded sentences. Journal of Experimental Psychology-General, 134(2), 222-241. doi:10.1037/0096-3445.134.2.222.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Den Os, E., & Boves, L. (2002). BabelWeb project develops multilingual guidelines. Multilingual Computing and Technologies, 13(1), 33-36.

    Abstract

    European cooperative effort seeks best-practices architecture and procedures for international sites.
  • Dijkstra, T., Moscoso del Prado Martín, F., Schulpen, B., Schreuder, R., & Baayen, R. H. (2005). A roommate in cream: Morphological family size effects on interlingual homograph recognition. Language and Cognitive Processes, 20, 7-41. doi:10.1080/01690960444000124.
  • Dimroth, C. (2002). Topics, assertions and additive words: How L2 learners get from information structure to target-language syntax. Linguistics, 40(4), 891-923. doi:10.1515/ling.2002.033.

    Abstract

    The article compares the integration of topic-related additive words at different stages of untutored L2 acquisition. Data stem from an "additive-elicitation task" that was designed in order to capture topic-related additive words in a context that is at the same time controlled for the underlying information structure and nondeviant from other kinds of narrative discourse. We relate the distinction between stressed and nonstressed forms of the German scope particles and adverbials auch ‘also’, noch ‘another’, wieder ‘again’, and immer noch ‘still’ to a uniform, information-structure-based principle: the stressed variants have scope over the topic information of the relevant utterances. It is then the common function of these additive words to express the additive link between the topic of the present utterance and some previous topic for which the same state of affairs is claimed to hold. This phenomenon has often been referred to as "contrastive topic," but contrary to what this term suggests, these topic elements are by no means deviant from the default in coherent discourse. In the underlying information structure, the validity of some given state of affairs for the present topic must be under discussion. Topic-related additive words then express that the state of affairs indeed applies to this topic, their function therefore coming close to the function of assertion marking. While this functional correspondence goes along with the formal organization of the basic stages of untutored second-language acquisition, its expression brings linguistic constraints into conflict when the acquisition of finiteness pushes learners to reorganize their utterances according to target-language syntax.
  • Dimroth, C., & Lindner, K. (2005). Was langsame Lerner uns zeigen können: Der Erwerb der Finitheit im Deutschen durch einsprachige Kinder mit spezifischer Sprachentwicklungsstörung und durch Zweitsprachlerner. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 40-61.
  • Dimroth, C., & Lasser, I. (Eds.). (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness [Special Issue]. Linguistics, 40(4).
  • Dimroth, C., & Lasser, I. (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness. Linguistics, 40(4), 647-651. doi:10.1515/ling.2002.027.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Dunn, M., Terrill, A., Reesink, G., Foley, R. A., & Levinson, S. C. (2005). Structural phylogenetics and the reconstruction of ancient language history. Science, 309(5743), 2072-2075. doi:10.1126/science.1114615.
  • Dunn, M., Reesink, G., & Terrill, A. (2002). The East Papuan languages: A preliminary typological appraisal. Oceanic Linguistics, 41(1), 28-62.

    Abstract

    This paper examines the Papuan languages of Island Melanesia, with a view to considering their typological similarities and differences. The East Papuan languages are thought to be the descendants of the languages spoken by the original inhabitants of Island Melanesia, who arrived in the area up to 50,000 years ago. The Oceanic Austronesian languages are thought to have come into the area with the Lapita peoples 3,500 years ago. With this historical backdrop in view, our paper seeks to investigate the linguistic relationships between the scattered Papuan languages of Island Melanesia. To do this, we survey various structural features, including syntactic patterns such as constituent order in clauses and noun phrases and other features of clause structure, paradigmatic structures of pronouns, and the structure of verbal morphology. In particular, we seek to discern similarities between the languages that might call for closer investigation, with a view to establishing genetic relatedness between some or all of the languages. In addition, in examining structural relationships between languages, we aim to discover whether it is possible to distinguish between original Papuan elements and diffused Austronesian elements of these languages. As this is a vast task, our paper aims merely to lay the groundwork for investigation into these and related questions.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una): Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina): Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eisner, F., & McQueen, J. M. (2005). The specificity of perceptual learning in speech processing. Perception & Psychophysics, 67(2), 224-238.

    Abstract

    We conducted four experiments to investigate the specificity of perceptual adjustments made to unusual speech sounds. Dutch listeners heard a female talker produce an ambiguous fricative [?] (between [f] and [s]) in [f]- or [s]-biased lexical contexts. Listeners with [f]-biased exposure (e.g., [witlo?]; from witlof, “chicory”; witlos is meaningless) subsequently categorized more sounds on an [εf]–[εs] continuum as [f] than did listeners with [s]-biased exposure. This occurred when the continuum was based on the exposure talker's speech (Experiment 1), and when the same test fricatives appeared after vowels spoken by novel female and male talkers (Experiments 1 and 2). When the continuum was made entirely from a novel talker's speech, there was no exposure effect (Experiment 3) unless fricatives from that talker had been spliced into the exposure talker's speech during exposure (Experiment 4). We conclude that perceptual learning about idiosyncratic speech is applied at a segmental level and is, under these exposure conditions, talker specific.
  • Enard, W., Przeworski, M., Fisher, S. E., Lai, C. S. L., Wiebe, V., Kitano, T., Pääbo, S., & Monaco, A. P. (2002). Molecular evolution of FOXP2, a gene involved in speech and language [Letters to Nature]. Nature, 418, 869-872. doi:10.1038/nature01025.

    Abstract

    Language is a uniquely human trait likely to have been a prerequisite for the development of human culture. The ability to develop articulate speech relies on capabilities, such as fine control of the larynx and mouth, that are absent in chimpanzees and other great apes. FOXP2 is the first gene relevant to the human ability to develop language. A point mutation in FOXP2 co-segregates with a disorder in a family in which half of the members have severe articulation difficulties accompanied by linguistic and grammatical impairment. This gene is disrupted by translocation in an unrelated individual who has a similar disorder. Thus, two functional copies of FOXP2 seem to be required for acquisition of normal spoken language. We sequenced the complementary DNAs that encode the FOXP2 protein in the chimpanzee, gorilla, orang-utan, rhesus macaque and mouse, and compared them with the human cDNA. We also investigated intraspecific variation of the human FOXP2 gene. Here we show that human FOXP2 contains changes in amino-acid coding and a pattern of nucleotide polymorphism, which strongly suggest that this gene has been the target of selection during recent human evolution.
  • Enfield, N. J. (2002). Semantic analysis of body parts in emotion terminology: Avoiding the exoticisms of 'obstinate monosemy' and 'online extension'. Pragmatics and Cognition, 10(1), 85-106. doi:10.1075/pc.10.12.05enf.

    Abstract

    Investigation of the emotions entails reference to words and expressions conventionally used for the description of emotion experience. Important methodological issues arise for emotion researchers, and the issues are of similarly central concern in linguistic semantics more generally. I argue that superficial and/or inconsistent description of linguistic meaning can have seriously misleading results. This paper is firstly a critique of standards in emotion research for its tendency to underrate and ill-understand linguistic semantics. It is secondly a critique of standards in some approaches to linguistic semantics itself. Two major problems occur. The first is failure to distinguish between conceptually distinct meanings of single words, neglecting the well-established fact that a single phonological string can signify more than one conceptual category (i.e., that words can be polysemous). The second error involves failure to distinguish between two kinds of secondary uses of words: (1) those which are truly active “online” extensions, and (2) those which are conventionalised secondary meanings and not active (qua “extensions”) at all. These semantic considerations are crucial to conclusions one may draw about cognition and conceptualisation based on linguistic evidence.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2005). The body as a cognitive artifact in kinship representations: Hand gesture diagrams by speakers of Lao. Current Anthropology, 46(1), 51-81.

    Abstract

    Central to cultural, social, and conceptual life are cognitive artifacts, the perceptible structures which populate our world and mediate our navigation of it, complementing, enhancing, and altering available affordances for the problem-solving challenges of everyday life. Much work in this domain has concentrated on technological artifacts, especially manual tools and devices and the conceptual and communicative tools of literacy and diagrams. Recent research on hand gestures and other bodily movements which occur during speech shows that the human body serves a number of the functions of "cognitive technologies," affording the special cognitive advantages claimed to be associated exclusively with enduring (e.g., printed or drawn) diagrammatic representations. The issue is explored with reference to extensive data from video-recorded interviews with speakers of Lao in Vientiane, Laos, which show integration of verbal descriptions with complex spatial representations akin to diagrams. The study has implications both for research on cognitive artifacts (namely, that the body is a visuospatial representational resource not to be overlooked) and for research on ethnogenealogical knowledge (namely, that hand gestures reveal speakers' conceptualizations of kinship structure which are of a different nature to and not necessarily retrievable from the accompanying linguistic code).
  • Enfield, N. J. (2005). Areal linguistics and mainland Southeast Asia. Annual Review of Anthropology, 34, 181-206. doi:10.1146/annurev.anthro.34.081804.120406.
  • Enfield, N. J. (2005). [Comment on the book Explorations in the deictic field]. Current Anthropology, 46(2), 212.