Publications

  • Abdel Rahman, R., Van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(5), 850-860. doi:10.1037/0278-7393.29.5.850.

    Abstract

    In the present study, the authors examined with event-related brain potentials whether phonological encoding in picture naming is mediated by basic semantic feature retrieval or proceeds independently. In a manual 2-choice go/no-go task the choice response depended on a semantic classification (animal vs. object) and the execution decision was contingent on a classification of name phonology (vowel vs. consonant). The introduction of a semantic task mixing procedure allowed for selectively manipulating the speed of semantic feature retrieval. Serial and parallel models were tested on the basis of their differential predictions for the effect of this manipulation on the lateralized readiness potential and N200 component. The findings indicate that phonological code retrieval is not strictly contingent on prior basic semantic feature processing.
  • Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge?: Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16(3), 372-382. doi:10.1016/S0926-6410(02)00305-1.

    Abstract

    In this article a new approach to the distinction between serial/contingent and parallel/independent processing in the human cognitive system is applied to semantic knowledge retrieval and phonological encoding of the word form in picture naming. In two-choice go/no-go tasks pictures of objects were manually classified on the basis of semantic and phonological information. An additional manipulation of the duration of the faster and presumably mediating process (semantic retrieval) allowed us to derive differential predictions from the two alternative models. These predictions were tested with two event-related brain potentials (ERPs), the lateralized readiness potential (LRP) and the N200. The findings indicate that phonological encoding can proceed in parallel to the retrieval of semantic features. A suggestion is made as to how to accommodate these findings within models of speech production.
  • Abdel Rahman, R., Sommer, W., & Schweinberger, S. R. (2002). Brain potential evidence for the time course of access to biographical facts and names of familiar persons. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(2), 366-373. doi:10.1037//0278-7393.28.2.366.

    Abstract

    On seeing familiar persons, biographical (semantic) information is typically retrieved faster and more accurately than name information. Serial stage models explain this pattern by suggesting that access to the name follows the retrieval of semantic information. In contrast, interactive activation and competition (IAC) models hold that both processes start together but name retrieval is slower because of structural peculiarities. With a 2-choice go/no-go procedure based on a semantic and a name-related classification, the authors tested differential predictions of the 2 alternative models for reaction times (RTs) and lateralized readiness potentials (LRP). Both LRP (Experiment 1) and RT (Experiment 2) results are in line with IAC models of face identification and naming.
  • Adank, P., Smits, R., & Van Hout, R. (2004). A comparison of vowel normalization procedures for language variation research. Journal of the Acoustical Society of America, 116(5), 3099-3109. doi:10.1121/1.1795335.

    Abstract

    An evaluation of vowel normalization procedures for the purpose of studying language variation is presented. The procedures were compared on how effectively they (a) preserve phonemic information, (b) preserve information about the talker's regional background (or sociolinguistic information), and (c) minimize anatomical/physiological variation in acoustic representations of vowels. Recordings were made for 80 female talkers and 80 male talkers of Dutch. These talkers were stratified according to their gender and regional background. The normalization procedures were applied to measurements of the fundamental frequency and the first three formant frequencies for a large set of vowel tokens. The normalization procedures were evaluated through statistical pattern analysis. The results show that normalization procedures that use information across multiple vowels ("vowel-extrinsic" information) to normalize a single vowel token performed better than those that include only information contained in the vowel token itself ("vowel-intrinsic" information). Furthermore, the results show that normalization procedures that operate on individual formants performed better than those that use information across multiple formants (e.g., "formant-extrinsic" F2-F1).
  • Adank, P., Van Hout, R., & Smits, R. (2004). An acoustic description of the vowels of Northern and Southern Standard Dutch. Journal of the Acoustical Society of America, 116(3), 1729-1738. doi:10.1121/1.1779271.
  • Akker, E., & Cutler, A. (2003). Prosodic cues to semantic structure in native and nonnative listening. Bilingualism: Language and Cognition, 6(2), 81-96. doi:10.1017/S1366728903001056.

    Abstract

    Listeners efficiently exploit sentence prosody to direct attention to words bearing sentence accent. This effect has been explained as a search for focus, furthering rapid apprehension of semantic structure. A first experiment supported this explanation: English listeners detected phoneme targets in sentences more rapidly when the target-bearing words were in accented position or in focussed position, but the two effects interacted, consistent with the claim that the effects serve a common cause. In a second experiment a similar asymmetry was observed with Dutch listeners and Dutch sentences. In a third and a fourth experiment, proficient Dutch users of English heard English sentences; here, however, the two effects did not interact. The results suggest that less efficient mapping of prosody to semantics may be one way in which nonnative listening fails to equal native listening.
  • Alario, F.-X., Schiller, N. O., Domoto-Reilly, K., & Caramazza, A. (2003). The role of phonological and orthographic information in lexical selection. Brain and Language, 84(3), 372-398. doi:10.1016/S0093-934X(02)00556-4.

    Abstract

    We report the performance of two patients with lexico-semantic deficits following left MCA CVA. Both patients produce similar numbers of semantic paraphasias in naming tasks, but presented one crucial difference: grapheme-to-phoneme and phoneme-to-grapheme conversion procedures were available only to one of them. We investigated the impact of this availability on the process of lexical selection during word production. The patient for whom conversion procedures were not operational produced semantic errors in transcoding tasks such as reading and writing to dictation; furthermore, when asked to name a given picture in multiple output modalities—e.g., to say the name of a picture and immediately after to write it down—he produced lexically inconsistent responses. By contrast, the patient for whom conversion procedures were available did not produce semantic errors in transcoding tasks and did not produce lexically inconsistent responses in multiple picture-naming tasks. These observations are interpreted in the context of the summation hypothesis (Hillis & Caramazza, 1991), according to which the activation of lexical entries for production would be made on the basis of semantic information and, when available, on the basis of form-specific information. The implementation of this hypothesis in models of lexical access is discussed in detail.
  • Allen, G. L., Kirasic, K. C., Rashotte, M. A., & Haun, D. B. M. (2004). Aging and path integration skill: Kinesthetic and vestibular contributions to wayfinding. Perception & Psychophysics, 66(1), 170-179.

    Abstract

    In a triangle completion task designed to assess path integration skill, younger and older adults performed similarly after being led, while blindfolded, along the route segments on foot, which provided both kinesthetic and vestibular information about the outbound path. In contrast, older adults' performance was impaired, relative to that of younger adults, after they were conveyed, while blindfolded, along the route segments in a wheelchair, which limited them principally to vestibular information. Correlational evidence suggested that cognitive resources were significant factors in accounting for age-related decline in path integration performance.
  • Ambridge, B., Rowland, C. F., Theakston, A. L., & Tomasello, M. (2006). Comparing different accounts of inversion errors in children's non-subject wh-questions: ‘What experimental data can tell us?’. Journal of Child Language, 33(3), 519-557. doi:10.1017/S0305000906007513.

    Abstract

    This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words (what, who, how and why), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6–4;6. Rates of non-inversion error (Who she is hitting?) were found not to differ by wh-word, auxiliary or number alone, but by lexical auxiliary subtype and by wh-word+lexical auxiliary combination. This finding counts against simple rule-based accounts of question acquisition that include no role for the lexical subtype of the auxiliary, and suggests that children may initially acquire wh-word+lexical auxiliary combinations from the input. For DO questions, auxiliary-doubling errors (What does she does like?) were also observed, although previous research has found that such errors are virtually non-existent for positive questions. Possible reasons for this discrepancy are discussed.
  • Ameka, F. K. (1987). A comparative analysis of linguistic routines in two languages: English and Ewe. Journal of Pragmatics, 11(3), 299-326. doi:10.1016/0378-2166(87)90135-4.

    Abstract

    It is very widely acknowledged that linguistic routines are not only embodiments of the sociocultural values of the speech communities that use them, but that their knowledge and appropriate use also form an essential part of a speaker's communicative/pragmatic competence. Despite this, many studies concentrate more on describing the use of routines than on explaining the socio-cultural aspects of their meaning and the way these affect their use. It is the contention of this paper that there is a need to go beyond descriptions to explanations and explications of the use and meaning of routines that are culturally and socially revealing. This view is illustrated by a comparative analysis of functionally equivalent formulaic expressions in English and Ewe. The similarities are noted and the differences explained in terms of the socio-cultural traditions associated with the respective languages. It is argued that insights gained from such studies are valuable for cross-cultural understanding and communication as well as for second language pedagogy.
  • Ameka, F. K. (1989). [Review of The case for lexicase: An outline of lexicase grammatical theory by Stanley Starosta]. Studies in Language, 13(2), 506-518.
  • Ameka, F. K. (2002). Cultural scripting of body parts for emotions: On 'jealousy' and related emotions in Ewe. Pragmatics and Cognition, 10(1-2), 27-55. doi:10.1075/pc.10.12.03ame.

    Abstract

    Different languages present a variety of ways of talking about emotional experience. Very commonly, feelings are described through the use of ‘body image constructions’ in which they are associated with processes in, or states of, specific body parts. The emotions and the body parts that are thought to be their locus and the kind of activity associated with these body parts vary cross-culturally. This study focuses on the meaning of three ‘body image constructions’ used to describe feelings similar to, but also different from, English ‘jealousy’, ‘envy’, and ‘covetousness’ in the West African language Ewe. It is demonstrated that a ‘moving body’, a psychologised eye, and red eyes are scripted for these feelings. It is argued that the expressions are not figurative and that their semantics provide good clues to understanding the cultural construction of both emotions and the body.
  • Ameka, F. K., & Breedveld, A. (2004). Areal cultural scripts for social interaction in West African communities. Intercultural Pragmatics, 1(2), 167-187. doi:10.1515/iprg.2004.1.2.167.

    Abstract

    Ways of interacting and not interacting in human societies have social, cognitive and cultural dimensions. These various aspects may be reflected in particular in relation to “taboos”. They reflect the ways of thinking and the values of a society. They are recognized as part of the communicative competence of the speakers and are learned in socialization. Some salient taboos are likely to be named in the language of the relevant society, others may not have a name. Interactional taboos can be specific to a cultural linguistic group or they may be shared across different communities that belong to a ‘speech area’ (Hymes 1972). In this article we describe a number of unnamed norms of communicative conduct which are widespread in West Africa such as the taboos on the use of the left hand in social interaction and on the use of personal names in adult address, and the widespread preference for the use of intermediaries for serious communication. We also examine a named avoidance (yaage) behavior specific to the Fulbe, a nomadic cattle-herding group spread from West Africa across the Sahel as far as Sudan. We show how tacit knowledge about these taboos and other interactive norms can be captured using the cultural scripts methodology.
  • Ameka, F. K. (2004). Grammar and cultural practices: The grammaticalization of triadic communication in West African languages. The Journal of West African Languages, 30(2), 5-28.
  • Baayen, R. H., Feldman, L. B., & Schreuder, R. (2006). Morphological influences on the recognition of monosyllabic monomorphemic words. Journal of Memory and Language, 55(2), 290-313. doi:10.1016/j.jml.2006.03.008.

    Abstract

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. Journal of Experimental Psychology: General, 133, 283–316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of monosyllabic, morphologically simple words. The present study supplements their work by making use of more flexible regression techniques that are better suited for dealing with collinearity and non-linearity, and by documenting the contributions of several variables that they did not take into account. In particular, we included measures of morphological connectivity, as well as a new frequency count, the frequency of a word in speech rather than in writing. The morphological measures emerged as strong predictors in visual lexical decision, but not in naming, providing evidence for the importance of morphological connectivity even for the recognition of morphologically simple words. Spoken frequency was predictive not only for naming but also for visual lexical decision. In addition, it co-determined subjective frequency estimates and norms for age of acquisition. Finally, we show that frequency predominantly reflects conceptual familiarity rather than familiarity with a word’s form.
  • Bastiaansen, M. C. M., Van Berkum, J. J. A., & Hagoort, P. (2002). Syntactic processing modulates the θ rhythm of the human EEG. NeuroImage, 17, 1479-1492. doi:10.1006/nimg.2002.1275.

    Abstract

    Changes in oscillatory brain dynamics can be studied by means of induced band power (IBP) analyses, which quantify event-related changes in amplitude of frequency-specific EEG rhythms. Such analyses capture EEG phenomena that are not part of traditional event-related potential measures. The present study investigated whether IBP changes in the δ, θ, and α frequency ranges are sensitive to syntactic violations in sentences. Subjects read sentences that either were correct or contained a syntactic violation. The violations were either grammatical gender agreement violations, where a prenominal adjective was not appropriately inflected for the head noun's gender, or number agreement violations, in which a plural quantifier was combined with a singular head noun. IBP changes of the concurrently measured EEG were computed in five frequency bands of 2-Hz width, individually adjusted on the basis of subjects' α peak, ranging approximately from 2 to 12 Hz. Words constituting a syntactic violation elicited larger increases in θ power than the same words in a correct sentence context, in an interval of 300–500 ms after word onset. Of all the frequency bands studied, this was true for the θ frequency band only. The scalp topography of this effect was different for different violations: following number violations a left-hemispheric dominance was found, whereas gender violations elicited a right-hemisphere dominance of the θ power increase. Possible interpretations of this effect are considered in closing.
  • Bastiaansen, M. C. M., & Hagoort, P. (2003). Event-induced theta responses as a window on the dynamics of memory. Cortex, 39(4-5), 967-972. doi:10.1016/S0010-9452(08)70873-6.

    Abstract

    An important, but often ignored distinction in the analysis of EEG signals is that between evoked activity and induced activity. Whereas evoked activity reflects the summation of transient post-synaptic potentials triggered by an event, induced activity, which is mainly oscillatory in nature, is thought to reflect changes in parameters controlling dynamic interactions within and between brain structures. We hypothesize that induced activity may yield information about the dynamics of cell assembly formation, activation and subsequent uncoupling, which may play a prominent role in different types of memory operations. We then describe a number of analysis tools that can be used to study the reactivity of induced rhythmic activity, both in terms of amplitude changes and of phase variability.

    We briefly discuss how alpha, gamma and theta rhythms are thought to be generated, paying special attention to the hypothesis that the theta rhythm reflects dynamic interactions between the hippocampal system and the neocortex. This hypothesis would imply that studying the reactivity of scalp-recorded theta may provide a window on the contribution of the hippocampus to memory functions.

    We review studies investigating the reactivity of scalp-recorded theta in paradigms engaging episodic memory, spatial memory and working memory. In addition, we review studies that relate theta reactivity to processes at the interface of memory and language. Despite many unknowns, the experimental evidence largely supports the hypothesis that theta activity plays a functional role in cell assembly formation, a process which may constitute the neural basis of memory formation and retrieval. The available data provide only highly indirect support for the hypothesis that scalp-recorded theta yields information about hippocampal functioning. It is concluded that studying induced rhythmic activity holds promise as an additional important way to study brain function.
  • Bastiaansen, M. C. M., Posthuma, D., Groot, P. F. C., & De Geus, E. J. C. (2002). Event-related alpha and theta responses in a visuo-spatial working memory task. Clinical Neurophysiology, 113(12), 1882-1893. doi:10.1016/S1388-2457(02)00303-6.

    Abstract

    Objective: To explore the reactivity of the theta and alpha rhythms during visuo-spatial working memory. Methods: One hundred and seventy-four subjects performed a delayed response task. They had to remember the spatial location of a target stimulus on a computer screen for a 1 or a 4 s retention interval. The target either remained visible throughout the entire interval (sensory trials) or disappeared after 150 ms (memory trials). Changes in induced band power (IBP) in the electroencephalogram (EEG) were analyzed in 4 narrow, individually adjusted frequency bands between 4 and 12 Hz. Results: After presentation of the target stimulus, a phasic power increase was found, irrespective of condition and delay interval, in the lower (roughly, 4–8 Hz) frequency bands, with a posterior maximum. During the retention interval, sustained occipital–parietal alpha power increase and frontal theta power decrease were found. Most importantly, the memory trials showed larger IBP decreases in the theta band over frontal electrodes than the sensory trials. Conclusions: The phasic power increase following target onset is interpreted to reflect encoding of the target location. The sustained theta decrease, which is larger for memory trials, is tentatively interpreted to reflect visuo-spatial working memory processes.
  • Bastiaansen, M. C. M., Van Berkum, J. J. A., & Hagoort, P. (2002). Event-related theta power increases in the human EEG during online sentence processing. Neuroscience Letters, 323(1), 13-16. doi:10.1016/S0304-3940(01)02535-6.

    Abstract

    By analyzing event-related changes in induced band power in narrow frequency bands of the human electroencephalograph, the present paper explores a possible functional role of the alpha and theta rhythms during the processing of words and of sentences. The results show a phasic power increase in the theta frequency range, together with a phasic power decrease in the alpha frequency range, following the presentation of words in a sentence. These effects may be related to word processing, either lexical or in relation to sentence context. Most importantly, there is a slow and highly frequency-specific increase in theta power as a sentence unfolds, possibly related to the formation of an episodic memory trace, or to incremental verbal working memory load.
  • Bastiaansen, M. C. M., Böcker, K. B. E., & Brunia, C. H. M. (2002). ERD as an index of anticipatory attention? Effects of stimulus degradation. Psychophysiology, 39(1), 16-28. doi:10.1111/1469-8986.3910016.

    Abstract

    Previous research has suggested that the stimulus-preceding negativity (SPN) is largely independent of stimulus modality. In contrast, the scalp topography of the event related desynchronization (ERD) related to the anticipation of stimuli providing knowledge of results (KR) is modality dependent. These findings, combined with functional SPN research, lead to the hypothesis that anticipatory ERD reflects anticipatory attention, whereas the SPN mainly depends on the affective-motivational properties of the anticipated stimulus. To further investigate the prestimulus ERD, and compare this measure with the SPN, 12 participants performed a time-estimation task, and were informed about the quality of their time estimation by an auditory or a visual stimulus providing KR. The KR stimuli could be either intact or degraded. Auditory degraded KR stimuli were less effective than other KR stimuli in guiding subsequent behavior, and were preceded by a larger SPN. There were no effects of degradation on the SPN in the visual modality. Preceding auditory KR stimuli no ERD was present, whereas preceding visual stimuli an occipital ERD was found. However, contrary to expectation, the latter was larger preceding intact than preceding degraded stimuli. It is concluded that the data largely agree with an interpretation of the pre-KR SPN as a reflection of the anticipation of the affective-motivational value of KR stimuli, and of the prestimulus ERD as a perceptual anticipatory attention process.
  • Bauer, B. L. M. (2004). Vigesimal numerals in Romance: An Indo-European perspective. General Linguistics, 41, 21-46.
  • Bauer, B. L. M. (2004). [Review of the book Pre-Indo-European by Winfred P. Lehmann]. Journal of Indo-European Studies, 32, 146-155.
  • Bauer, B. L. M. (1987). L’évolution des structures morphologiques et syntaxiques du latin au français. Travaux de linguistique, 14-15, 95-107.
  • Bauer, B. L. M. (2002). Variability in word order: Adjectives and comparatives in Latin, Romance, and Germanic. Southwest Journal of Linguistics, 20, 19-50.
  • Baumann, H., Dirksmeyer, R., & Wittenburg, P. (2004). Long-term archiving. Language Archive Newsletter, 1(2), 3-3.
  • Belke, E., & Meyer, A. S. (2002). Tracking the time course of multidimensional stimulus discrimination: Analyses of viewing patterns and processing times during "same"-"different" decisions. European Journal of Cognitive Psychology, 14(2), 237-266. doi:10.1080/09541440143000050.

    Abstract

    We investigated the time course of conjunctive "same"-"different" judgements for visually presented object pairs by means of combined reaction time and on-line eye movement measurements. The analyses of viewing patterns, viewing times, and reaction times showed that participants engaged in a parallel self-terminating search for differences. In addition, the results obtained for objects differing in only one dimension suggest that processing times may depend on the relative codability of the stimulus dimensions. The results are reviewed in a broader framework in view of higher-order processes. We propose that overspecifications of colour, often found in object descriptions, may have an "early" visual rather than a "late" linguistic origin. In a parallel assessment of the detection materials, participants overspecified the objects' colour substantially more often than their size. We argue that referential overspecifications of colour are largely attributable to mechanisms of visual discrimination.
  • Benazzo, S., Dimroth, C., Perdue, C., & Watorek, M. (2004). Le rôle des particules additives dans la construction de la cohésion discursive en langue maternelle et en langue étrangère. Langages, 155, 76-106.

    Abstract

    We compare the use of additive particles such as aussi ('also'), encore ('again, still'), and their 'translation equivalents', in a narrative task based on a series of pictures performed by groups of children aged 4 years, 7 years, and 10 years using their first language (L1 French, German, Polish), and by adult Polish and German learners of French as a second language (L2). From the cross-sectional analysis we propose developmental patterns which show remarkable similarities for all types of learner, but which stem from different determining factors. For the children, the patterns can best be explained by the development of their capacity to use available items in appropriate discourse contexts; for the adults, the limitations of their linguistic repertoire at different levels of achievement determine the possibility of incorporating these items into their utterance structure. Finally, we discuss to what extent these general tendencies are influenced by the specificities of the different languages used.
  • Bercelli, F., Viaro, M., & Rossano, F. (2004). Attività in alcuni generi di psicoterapia. Rivista di psicolinguistica applicata, IV (2/3), 111-127. doi:10.1400/19208.

    Abstract

    The main aim of our paper is to contribute to the outline of a general inventory of activities in psychotherapy, as a step towards a description of the overall conversational organizations of different therapeutic approaches. From the perspective of Conversation Analysis, we describe some activities commonly occurring in a corpus of sessions conducted by cognitive and relational-systemic therapists. Two activities appear to be basic: (a) inquiry: therapists elicit information from patients on their problems and circumstances; (b) reworking: therapists say something designed as an elaboration of what patients have previously said, or as something that can be grounded on it; and patients are induced to confirm/disprove and contribute to the elaboration. Furthermore, we describe other activities, which turn out to be auxiliary to the basic ones: storytelling, procedural arrangement, recalling, noticing, teaching. We finally show some ways in which these activities can be integrated through conversational interaction.
  • Bock, K., Butterfield, S., Cutler, A., Cutting, J. C., Eberhard, K. M., & Humphreys, K. R. (2006). Number agreement in British and American English: Disagreeing to agree collectively. Language, 82(1), 64-113.

    Abstract

    British and American speakers exhibit different verb number agreement patterns when sentence subjects have collective head nouns. From linguistic and psycholinguistic accounts of how agreement is implemented, three alternative hypotheses can be derived to explain these differences. The hypotheses involve variations in the representation of notional number, disparities in how notional and grammatical number are used, and inequalities in the grammatical number specifications of collective nouns. We carried out a series of corpus analyses, production experiments, and norming studies to test these hypotheses. The results converge to suggest that British and American speakers are equally sensitive to variations in notional number and implement subject-verb agreement in much the same way, but are likely to differ in the lexical specifications of number for collectives. The findings support a psycholinguistic theory that explains verb and pronoun agreement within a parallel architecture of lexical and syntactic formulation.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying (or thinking and speaking) that must be bridged during the creation of even the most prosaic utterances of a language.
  • Bod, R., Fitz, H., & Zuidema, W. (2006). On the structural ambiguity in natural language that the neural architecture cannot deal with [Commentary]. Behavioral and Brain Sciences, 29, 71-72. doi:10.1017/S0140525X06239025.

    Abstract

    We argue that van der Velde and de Kamps's model does not solve the binding problem but merely shifts the burden of constructing appropriate neural representations of sentence structure to unexplained preprocessing of the linguistic input. As a consequence, their model is not able to explain how various neural representations can be assigned to sentences that are structurally ambiguous.
  • Bohnemeyer, J. (2002). [Review of the book Explorations in linguistic relativity ed. by Martin Pütz and Marjolijn H. Verspoor]. Language in Society, 31(3), 452-456. doi:10.1017/S0047404502020316.
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Borgwaldt, S. R., Hellwig, F. M., & De Groot, A. M. B. (2004). Word-initial entropy in five languages: Letter to sound, and sound to letter. Written Language & Literacy, 7(2), 165-184.

    Abstract

    Alphabetic orthographies show more or less ambiguous relations between spelling and sound patterns. In transparent orthographies, like Italian, the pronunciation can be predicted from the spelling and vice versa. Opaque orthographies, like English, often display unpredictable spelling–sound correspondences. In this paper we present a computational analysis of word-initial bi-directional spelling–sound correspondences for Dutch, English, French, German, and Hungarian, stated in entropy values for various grain sizes. This allows us to position the five languages on the continuum from opaque to transparent orthographies, both in spelling-to-sound and sound-to-spelling directions. The analysis is based on metrics derived from information theory, and is therefore independent of any specific theory of visual word recognition as well as of any specific theoretical approach to orthography.
  • Bowerman, M. (1975). Commentary on L. Bloom, P. Lightbown, & L. Hood, “Structure and variation in child language”. Monographs of the Society for Research in Child Development, 40(2), 80-90. Retrieved from http://www.jstor.org/stable/1165986.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Braun, B. (2006). Phonetics and phonology of thematic contrast in German. Language and Speech, 49(4), 451-493.

    Abstract

    It is acknowledged that contrast plays an important role in understanding discourse and information structure. While it is commonly assumed that contrast can be marked by intonation only, our understanding of the intonational realization of contrast is limited. For German there is mainly introspective evidence that the rising theme accent (or topic accent) is realized differently when signaling contrast than when not. In this article, the acoustic basis for the reported impressionistic differences is investigated in terms of the scaling (height) and alignment (positioning) of tonal targets.

    Subjects read target sentences in a contrastive and a noncontrastive context (Experiment 1). Prosodic annotation revealed that thematic accents were not realized with different accent types in the two contexts but acoustic comparison showed that themes in contrastive context exhibited a higher and later peak. The alignment and scaling of accents can hence be controlled in a linguistically meaningful way, which has implications for intonational phonology. In Experiment 2, nonlinguists' perception of a subset of the production data was assessed. They had to choose whether, in a contrastive context, the presumed contrastive or noncontrastive realization of a sentence was more appropriate. For some sentence pairs only, subjects had a clear preference. For Experiment 3, a group of linguists annotated the thematic accents of the contrastive and noncontrastive versions of the same data as used in Experiment 2. There was considerable disagreement in labels, but different accent types were consistently used when the two versions differed strongly in F0 excursion. Although themes in contrastive contexts were clearly produced differently than themes in noncontrastive contexts, this difference is not easily perceived or annotated.
  • Braun, B., Kochanski, G., Grabe, E., & Rosner, B. S. (2006). Evidence for attractors in English intonation. Journal of the Acoustical Society of America, 119(6), 4006-4015. doi:10.1121/1.2195267.

    Abstract

    Although the pitch of the human voice is continuously variable, some linguists contend that intonation in speech is restricted to a small, limited set of patterns. This claim is tested by asking subjects to mimic a block of 100 randomly generated intonation contours and then to imitate themselves in several successive sessions. The produced f0 contours gradually converge towards a limited set of distinct, previously recognized basic English intonation patterns. These patterns are "attractors" in the space of possible English intonation contours. The convergence does not occur immediately. Seven of the ten participants show continued convergence toward their attractors after the first iteration. Subjects retain and use information beyond phonological contrasts, suggesting that intonational phonology is not a complete description of their mental representation of intonation.
  • Broeder, D., & Wittenburg, P. (2006). The IMDI metadata framework, its current application and future direction. International Journal of Metadata, Semantics and Ontologies, 1(2), 119-132. doi:10.1504/IJMSO.2006.011008.

    Abstract

    In addition to a suitable set of metadata descriptors for language resources, the IMDI Framework offers a set of tools and an infrastructure for using them. This paper gives an overview of these aspects and concludes by describing the intentions and hopes for ensuring the interoperability of the IMDI framework with more general frameworks under development. An evaluation of the current state of the IMDI Framework is presented, with an analysis of its benefits and more problematic issues. Finally, we describe work on long-term stability for IMDI by linking up with the work done within the ISO TC37/SC4 subcommittee.
  • Broeder, D., Auer, E., & Wittenburg, P. (2006). Unique resource identifiers. Language Archive Newsletter, no. 8, 8-9.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broersma, M., & De Bot, K. (2006). Triggered codeswitching: A corpus-based evaluation of the original triggering hypothesis and a new alternative. Bilingualism: Language and Cognition, 9(1), 1-13. doi:10.1017/S1366728905002348.

    Abstract

    In this article the triggering hypothesis for codeswitching proposed by Michael Clyne is discussed and tested. According to this hypothesis, cognates can facilitate codeswitching of directly preceding or following words. It is argued that the triggering hypothesis in its original form is incompatible with language production models, as it assumes that language choice takes place at the surface structure of utterances, while in bilingual production models language choice takes place along with lemma selection. An adjusted version of the triggering hypothesis is proposed in which triggering takes place during lemma selection and the scope of triggering is extended to basic units in language production. Data from a Dutch–Moroccan Arabic corpus are used for a statistical test of the original and the adjusted triggering theory. The codeswitching patterns found in the data support part of the original triggering hypothesis, but they are best explained by the adjusted triggering theory.
  • Brown, P. (1989). [Review of the book Language, gender, and sex in comparative perspective ed. by Susan U. Philips, Susan Steeleand Christine Tanz]. Man, 24(1), 192.
  • Brown, A. (2006). Cross-linguistic influence in first and second languages: Convergence in speech and gesture. PhD Thesis, Boston University, Boston.

    Abstract

    Research on second language acquisition typically focuses on how a first language (L1) influences a second language (L2) in different linguistic domains and across modalities. This dissertation, in contrast, explores interactions between languages in the mind of a language learner by asking 1) can an emerging L2 influence an established L1? 2) if so, how is such influence realized? 3) are there parallel influences of the L1 on the L2? These questions were investigated for the expression of Manner (e.g. climb, roll) and Path (e.g. up, down) of motion, areas where substantial crosslinguistic differences exist in speech and co-speech gesture. Japanese and English are typologically distinct in this domain; therefore, narrative descriptions of four motion events were elicited from monolingual Japanese speakers (n=16), monolingual English speakers (n=13), and native Japanese speakers with intermediate knowledge of English (narratives elicited in both their L1 and L2, n=28). Ways in which Path and Manner were expressed at the lexical, syntactic, and gestural levels were analyzed in monolingual and non-monolingual production. Results suggest mutual crosslinguistic influences. In their L1, native Japanese speakers with knowledge of English displayed both Japanese- and English-like use of morphosyntactic elements to express Path and Manner (i.e. a combination of verbs and other constructions). Consequently, non-monolingual L1 discourse contained significantly more Path expressions per clause, with significantly greater mention of Goal of motion than monolingual Japanese and English discourse. Furthermore, the gestures of non-monolingual speakers diverged from their monolingual counterparts with differences in depiction of Manner and gesture perspective (character versus observer). Importantly, non-monolingual production in the L1 was not ungrammatical, but simply reflected altered preferences. 
    As for L2 production, many effects of L1 influence were seen, crucially in areas parallel to those described above. Overall, production by native Japanese speakers who knew English differed from that of monolingual Japanese and English speakers. But L1 and L2 production within non-monolingual individuals was similar. These findings imply a convergence of L1-L2 linguistic systems within the mind of a language learner. Theoretical and methodological implications for SLA research and language assessment with respect to the ‘native speaker standard language’ are discussed.
  • Brown, P. (2006). Language, culture and cognition: The view from space. Zeitschrift für Germanistische Linguistik, 34, 64-86.

    Abstract

    This paper addresses the vexed questions of how language relates to culture, and what kind of notion of culture is important for linguistic explanation. I first sketch five perspectives - five different construals - of culture apparent in linguistics and in cognitive science more generally. These are: (i) culture as ethno-linguistic group, (ii) culture as a mental module, (iii) culture as knowledge, (iv) culture as context, and (v) culture as a process emergent in interaction. I then present my own work on spatial language and cognition in a Mayan language and culture, to explain why I believe a concept of culture is important for linguistics. I argue for a core role for cultural explanation in two domains: in analysing the semantics of words embedded in cultural practices which color their meanings (in this case, spatial frames of reference), and in characterizing thematic and functional links across different domains in the social and semiotic life of a particular group of people.
  • Brown, P. (1976). Women and politeness: A new perspective on language and society. Reviews in Anthropology, 3, 240-249.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Burenhult, N. (2006). Body part terms in Jahai. Language Sciences, 28(2-3), 162-180. doi:10.1016/j.langsci.2005.11.002.

    Abstract

    This article explores the lexicon of body part terms in Jahai, a Mon-Khmer language spoken by a group of hunter–gatherers in the Malay Peninsula. It provides an extensive inventory of body part terms and describes their structural and semantic properties. The Jahai body part lexicon pays attention to fine anatomical detail but lacks labels for major, ‘higher-level’ categories, like ‘trunk’, ‘limb’, ‘arm’ and ‘leg’. In this lexicon it is therefore sometimes difficult to discern a clear partonomic hierarchy, a presumed universal of body part terminology.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Cablitz, G. (2002). Marquesan: A grammar of space. PhD Thesis, Christian Albrechts U., Kiel.
  • Carlsson, K., Andersson, J., Petrovic, P., Petersson, K. M., Öhman, A., & Ingvar, M. (2006). Predictability modulates the affective and sensory-discriminative neural processing of pain. NeuroImage, 32(4), 1804-1814. doi:10.1016/j.neuroimage.2006.05.027.

    Abstract

    Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuating focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and posterior insula. In contrast, the unpredictable, more aversive context was correlated with brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex that we attribute to enhanced alertness and sustained attention during unpredictability.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F. (2006). Derivational morphology of Italian: Principles for formalization. Literary and Linguistic Computing, 21(SUPPL. 1), 41-53. doi:10.1093/llc/fql007.

    Abstract

    The present paper investigates the major derivational strategies underlying the formation of suffixed words in Italian, with the purpose of tackling the issue of their formalization. After specifying the theoretical cognitive premises that orient the work, the interacting component modules of the suffixation process, i.e. morphonology, morphotactics and affixal semantics, are explored empirically by drawing on ample naturally occurring data from a corpus of written Italian. Special attention is paid to the semantic mechanisms involved in suffixation. Some semantic nuclei are identified for the major suffixed word types of Italian, which are due to word formation rules active at the synchronic level, and a semantic configuration of productive suffixes is suggested. A general framework is then sketched, which combines classical finite-state methods with a feature unification-based word grammar. More specifically, the semantic information specified for the affixal material is internalised into the structures of Lexical Functional Grammar (LFG). The formal model allows us to integrate the various modules of suffixation. In particular, it treats, on the one hand, the interface between morphonology/morphotactics and semantics and, on the other hand, the interface between suffixation and inflection. Furthermore, since LFG exploits a hierarchically organised lexicon to structure the information regarding the affixal material, affixal co-selectional restrictions are advantageously constrained, avoiding potentially multiple spurious analyses/generations.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Cho, T., & McQueen, J. M. (2006). Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants. Journal of the Acoustical Society of America, 119(5), 3085-3096. doi:10.1121/1.2188917.

    Abstract

    We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, but to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carstens electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., Jun, S.-A., & Ladefoged, P. (2002). Acoustic and aerodynamic correlates of Korean stops and fricatives. Journal of Phonetics, 30(2), 193-228. doi:10.1006/jpho.2001.0153.

    Abstract

    This study examines acoustic and aerodynamic characteristics of consonants in standard Korean and in Cheju, an endangered Korean language. The focus is on the well-known three-way distinction among voiceless stops (i.e., lenis, fortis, aspirated) and the two-way distinction between the voiceless fricatives /s/ and /s*/. While such a typologically unusual contrast among voiceless stops has long drawn the attention of phoneticians and phonologists, there is no single work in the literature that discusses a body of data representing a relatively large number of speakers. This study reports a variety of acoustic and aerodynamic measures obtained from 12 Korean speakers (four speakers of Seoul Korean and eight speakers of Cheju). Results show that, in addition to findings similar to those reported by others, there are three crucial points worth noting. Firstly, lenis, fortis, and aspirated stops are systematically differentiated from each other by the voice quality of the following vowel. Secondly, these stops are also differentiated by aerodynamic mechanisms. The aspirated and fortis stops are similar in supralaryngeal articulation, but employ a different relation between intraoral pressure and flow. Thirdly, our study suggests that the fricative /s/ is better categorized as “lenis” rather than “aspirated”. The paper concludes with a discussion of the implications of Korean data for theories of the voicing contrast and their phonological representations.
  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork for the articulatory programs is prepared, which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and in particular whether or not these syllable programs are stored, separate from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence that the syllable plays a functionally important role in speech production and that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advanced planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it was demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a last study, effects of syllable preparation and syllable frequency were investigated in a combined study to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Cholin, J., Levelt, W. J. M., & Schiller, N. O. (2006). Effects of syllable frequency in speech production. Cognition, 99, 205-235. doi:10.1016/j.cognition.2005.01.009.

    Abstract

    In the speech production model proposed by [Levelt, W. J. M., Roelofs, A., Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pp. 1-75.], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called Mental Syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: If the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster compared to low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model by Levelt et al., are discussed.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
  • Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45(3), 207-228.

    Abstract

    Four cross-modal priming experiments and two forced-choice identification experiments investigated the use of suprasegmental cues to stress in the recognition of spoken English words, by native (English-speaking) and non-native (Dutch) listeners. Previous results had indicated that suprasegmental information was exploited in lexical access by Dutch but not by English listeners. For both listener groups, recognition of visually presented target words was faster, in comparison to a control condition, after stress-matching spoken primes, either monosyllabic (mus- from MUsic/muSEum) or bisyllabic (admi- from ADmiral/admiRAtion). For native listeners, the effect of stress-mismatching bisyllabic primes was not different from that of control primes, but mismatching monosyllabic primes produced partial facilitation. For non-native listeners, both bisyllabic and monosyllabic stress-mismatching primes produced partial facilitation. Native English listeners thus can exploit suprasegmental information in spoken-word recognition, but information from two syllables is used more effectively than information from one syllable. Dutch listeners are less proficient at using suprasegmental information in English than in their native language, but, as in their native language, use mono- and bisyllabic information to an equal extent. In forced-choice identification, Dutch listeners outperformed native listeners at correctly assigning a monosyllabic fragment (e.g., mus-) to one of two words differing in stress.
  • Cox, S., Rösler, D., & Skiba, R. (1989). A tailor-made database for language teaching material. Literary & Linguistic Computing, 4(4), 260-264.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties. Gramma/TTT, 9, 141-156.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Cutler, A., & Otake, T. (2002). Rhythmic categories in spoken-word recognition. Journal of Memory and Language, 46(2), 296-322. doi:10.1006/jmla.2001.2814.

    Abstract

    Rhythmic categories such as morae in Japanese or stress units in English play a role in the perception of spoken language. We examined this role in Japanese, since recent evidence suggests that morae may intervene as structural units in word recognition. First, we found that traditional puns more often substituted part of a mora than a whole mora. Second, when listeners reconstructed distorted words, e.g. panorama from panozema, responses were faster and more accurate when only a phoneme was distorted (panozama, panorema) than when a whole CV mora was distorted (panozema). Third, lexical decisions on the same nonwords were better predicted by duration and number of phonemes from nonword uniqueness point to word end than by number of morae. Our results indicate no role for morae in early spoken-word processing; we propose that rhythmic categories constrain not initial lexical activation but subsequent processes of speech segmentation and selection among word candidates.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Demuth, K., & McQueen, J. M. (2002). Universality versus language-specificity in listening to running speech. Psychological Science, 13(3), 258-262. doi:10.1111/1467-9280.00447.

    Abstract

    Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.

    Abstract

    Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability).
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A. (1976). High-stress words are easier to perceive than low-stress words, even when they are equally stressed. Texas Linguistic Forum, 2, 53-57.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60. Retrieved from http://www.psychonomic.org/search/view.cgi?id=18194.

    Abstract

    An acoustically invariant one-word segment occurred in two versions of one syntactic context. In one version, the preceding intonation contour indicated that a stress would fall at the point where this word occurred. In the other version, the preceding contour predicted reduced stress at that point. Reaction time to the initial phoneme of the word was faster in the former case, despite the fact that no acoustic correlates of stress were present. It is concluded that a part of the sentence comprehension process is the prediction of upcoming sentence accents.
  • Cutler, A. (1975). Sentence stress and sentence comprehension. PhD Thesis, University of Texas, Austin.
  • Cutler, A. (1989). Straw modules [Commentary/Massaro: Speech perception]. Behavioral and Brain Sciences, 12, 760-762.
  • Cutler, A. (1989). The new Victorians. New Scientist, (1663), 66.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190 000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cutler, A., & Fay, D. (1975). You have a Dictionary in your Head, not a Thesaurus. Texas Linguistic Forum, 1, 27-40.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47(2), 292-314. doi:10.1016/S0749-596X(02)00001-3.

    Abstract

    The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse. In Experiment 1, the first utterance instructed participants to move one object above or below a shape (e.g., “Put the candle/candy below the triangle”) and the second utterance contained an accented or deaccented definite noun phrase which referred to the same object or introduced a new entity (e.g., “Now put the CANDLE above the square” vs. “Now put the candle ABOVE THE SQUARE”). Fixations to the competitor (e.g., candy) demonstrated a bias to interpret deaccented nouns as anaphoric and accented nouns as nonanaphoric. Experiment 2 used only accented nouns in the second instruction, varying whether the referent of this second instruction was the Theme of the first instruction (e.g., “Put the candle below the triangle”) or the Goal of the first instruction (e.g., “Put the necklace below the candle”). Participants preferred to interpret accented noun phrases as referring to a previously mentioned nonfocused entity (the Goal) rather than as introducing a new unmentioned entity.
  • Damian, M. F., & Abdel Rahman, R. (2003). Semantic priming in the naming of objects and famous faces. British Journal of Psychology, 94(4), 517-527.

    Abstract

    Researchers interested in face processing have recently debated whether access to the name of a known person occurs in parallel with retrieval of semantic-biographical codes, rather than in a sequential fashion. Recently, Schweinberger, Burton, and Kelly (2001) took a failure to obtain a semantic context effect in a manual syllable judgment task on names of famous faces as support for this position. In two experiments, we compared the effects of visually presented categorically related prime words with either objects (e.g. prime: animal; target: dog) or faces of celebrities (e.g. prime: actor; target: Bruce Willis) as targets. Targets were either manually categorized with regard to the number of syllables (as in Schweinberger et al.), or they were overtly named. For neither objects nor faces was semantic priming obtained in syllable decisions; crucially, however, priming was obtained when objects and faces were overtly named. These results suggest that both face and object naming are susceptible to semantic context effects.
  • Davidson, D. J. (2006). Strategies for longitudinal neurophysiology [commentary on Osterhout et al.]. Language Learning, 56(suppl. 1), 231-234. doi:10.1111/j.1467-9922.2006.00362.x.
  • Den Os, E., & Boves, L. (2002). BabelWeb project develops multilingual guidelines. Multilingual Computing and Technologies, 13(1), 33-36.

    Abstract

    A European cooperative effort seeks best-practice architecture and procedures for international sites.
  • Desmet, T., De Baecke, C., Drieghe, D., Brysbaert, M., & Vonk, W. (2006). Relative clause attachment in Dutch: On-line comprehension corresponds to corpus frequencies when lexical variables are taken into account. Language and Cognitive Processes, 21(4), 453-485. doi:10.1080/01690960400023485.

    Abstract

    Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., ‘Someone shot the servant of the actress who was on the balcony’) was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
  • Dietrich, C. (2006). The acquisition of phonological structure: Distinguishing contrastive from non-contrastive variation. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57829.
  • Dimroth, C. (2002). Topics, assertions and additive words: How L2 learners get from information structure to target-language syntax. Linguistics, 40(4), 891-923. doi:10.1515/ling.2002.033.

    Abstract

    The article compares the integration of topic-related additive words at different stages of untutored L2 acquisition. Data stem from an “additive-elicitation task” that was designed in order to capture topic-related additive words in a context that is at the same time controlled for the underlying information structure and nondeviant from other kinds of narrative discourse. We relate the distinction between stressed and nonstressed forms of the German scope particles and adverbials auch ‘also’, noch ‘another’, wieder ‘again’, and immer noch ‘still’ to a uniform, information-structure-based principle: the stressed variants have scope over the topic information of the relevant utterances. It is then the common function of these additive words to express the additive link between the topic of the present utterance and some previous topic for which the same state of affairs is claimed to hold. This phenomenon has often been referred to as “contrastive topic,” but contrary to what this term suggests, these topic elements are by no means deviant from the default in coherent discourse. In the underlying information structure, the validity of some given state of affairs for the present topic must be under discussion. Topic-related additive words then express that the state of affairs indeed applies to this topic, their function therefore coming close to the function of assertion marking. While this functional correspondence goes along with the formal organization of the basic stages of untutored second-language acquisition, its expression brings linguistic constraints into conflict when the acquisition of finiteness pushes learners to reorganize their utterances according to target-language syntax.
  • Dimroth, C., & Lasser, I. (Eds.). (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness [Special Issue]. Linguistics, 40(4).
  • Dimroth, C., & Lasser, I. (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness. Linguistics, 40(4), 647-651. doi:10.1515/ling.2002.027.
  • Dingemanse, M. (2006). The semantics of Bantu noun classification: A review and comparison of three approaches. Master Thesis, Leiden University.
  • Dingemanse, M. (2006). The body in Yoruba: A linguistic study. Master Thesis, Leiden University, Leiden.