Publications

  • Abma, R., Breeuwsma, G., & Poletiek, F. H. (2001). Toetsen in het onderwijs. De Psycholoog, 36, 638-639.
  • Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.

    Abstract

    At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to "package" spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
  • Bastiaansen, M. C. M., Böcker, K. B. E., Brunia, C. H. M., De Munck, J. C., & Spekreijse, H. (2001). Desynchronization during anticipatory attention for an upcoming stimulus: A comparative EEG/MEG study. Clinical Neurophysiology, 112, 393-403.

    Abstract

    Objectives: Our neurophysiological model of anticipatory behaviour (e.g. Acta Psychol 101 (1999) 213; Bastiaansen et al., 1999a) predicts an activation of (primary) sensory cortex during anticipatory attention for an upcoming stimulus. In this paper we attempt to demonstrate this by means of event-related desynchronization (ERD). Methods: Five subjects performed a time estimation task, and were informed about the quality of their time estimation by either visual or auditory stimuli providing Knowledge of Results (KR). EEG and MEG were recorded in separate sessions, and ERD was computed in the 8-10 and 10-12 Hz frequency bands for both datasets. Results: Both in the EEG and the MEG we found an occipitally maximal ERD preceding the visual KR for all subjects. Preceding the auditory KR, no ERD was present in the EEG, whereas in the MEG we found an ERD over the temporal cortex in two of the five subjects. These subjects were also found to have higher levels of absolute power over temporal recording sites in the MEG than the other subjects, which we consider to be an indication of the presence of a 'tau' rhythm (e.g. Neurosci Lett 222 (1997) 111). Conclusions: It is concluded that the results are in line with the predictions of our neurophysiological model.
  • Bastiaansen, M. C. M., & Brunia, C. H. M. (2001). Anticipatory attention: An event-related desynchronization approach. International Journal of Psychophysiology, 43, 91-107.

    Abstract

    This paper addresses the question of whether anticipatory attention - i.e. attention directed towards an upcoming stimulus in order to facilitate its processing - is realized at the neurophysiological level by a pre-stimulus desynchronization of the sensory cortex corresponding to the modality of the anticipated stimulus, reflecting the opening of a thalamocortical gate in the relevant sensory modality. It is argued that a technique called Event-Related Desynchronization (ERD) of rhythmic 10-Hz activity is well suited to study the thalamocortical processes that are thought to mediate anticipatory attention. In a series of experiments, ERD was computed on EEG and MEG data, recorded while subjects performed a time estimation task and were informed about the quality of their time estimation by stimuli providing Knowledge of Results (KR). The modality of the KR stimuli (auditory, visual, or somatosensory) was manipulated both within and between experiments. The results indicate to varying degrees that preceding the presentation of the KR stimuli, ERD is present over the sensory cortex corresponding to the modality of the KR stimulus. The general pattern of results supports the notion that a thalamocortical gating mechanism forms the neurophysiological basis of anticipatory attention. Furthermore, the results support the notion that Event-Related Potential (ERP) and ERD measures reflect fundamentally different neurophysiological processes.
  • Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.

    Abstract

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator. Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset. Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps. Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD.
  • Bock, K., Eberhard, K. M., Cutting, J. C., Meyer, A. S., & Schriefers, H. (2001). Some attractions of verb agreement. Cognitive Psychology, 43(2), 83-128. doi:10.1006/cogp.2001.0753.

    Abstract

    In English, words like scissors are grammatically plural but conceptually singular, while words like suds are both grammatically and conceptually plural. Words like army can be construed plurally, despite being grammatically singular. To explore whether and how congruence between grammatical and conceptual number affected the production of subject-verb number agreement in English, we elicited sentence completions for complex subject noun phrases like The advertisement for the scissors. In these phrases, singular subject nouns were followed by distractor words whose grammatical and conceptual numbers varied. The incidence of plural attraction (the use of plural verbs after plural distractors) increased only when distractors were grammatically plural, and revealed no influence from the distractors' number meanings. Companion experiments in Dutch offered converging support for this account and suggested that similar agreement processes operate in that language. The findings argue for a component of agreement that is sensitive primarily to the grammatical reflections of number. Together with other results, the evidence indicates that the implementation of agreement in languages like English and Dutch involves separable processes of number marking and number morphing, in which number meaning plays different parts.

  • Bohnemeyer, J. (2000). Event order in language and cognition. Linguistics in the Netherlands, 17(1), 1-16. doi:10.1075/avt.17.04boh.
  • Broersma, M., & De Bot, K. (2001). De triggertheorie voor codewisseling: De oorspronkelijke en een aangepaste versie (‘The trigger theory for codeswitching: The original and an adjusted version’). Toegepaste Taalwetenschap in Artikelen, 65(1), 41-54.
  • Brown, C. M., Van Berkum, J. J. A., & Hagoort, P. (2000). Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding. Journal of Psycholinguistic Research, 29(1), 53-68. doi:10.1023/A:1005172406969.

    Abstract

    A study is presented on the effects of discourse–semantic and lexical–syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior discourse–semantic information biased toward one analysis of the temporary ambiguity, whereas the lexical-syntactic information allowed only for the alternative analysis. The ERP results show that discourse–semantic information can momentarily take precedence over syntactic information, even if this violates grammatical gender agreement rules.
  • Brown, C. M., Hagoort, P., & Chwilla, D. J. (2000). An event-related brain potential analysis of visual word priming effects. Brain and Language, 72, 158-190. doi:10.1006/brln.1999.2284.

    Abstract

    Two experiments are reported that provide evidence on task-induced effects during visual lexical processing in a prime-target semantic priming paradigm. The research focuses on target expectancy effects by manipulating the proportion of semantically related and unrelated word pairs. In Experiment 1, a lexical decision task was used and reaction times (RTs) and event-related brain potentials (ERPs) were obtained. In Experiment 2, subjects silently read the stimuli, without any additional task demands, and ERPs were recorded. The RT and ERP results of Experiment 1 demonstrate that an expectancy mechanism contributed to the priming effect when a high proportion of related word pairs was presented. The ERP results of Experiment 2 show that in the absence of extraneous task requirements, an expectancy mechanism is not active. However, a standard ERP semantic priming effect was obtained in Experiment 2. The combined results show that priming effects due to relatedness proportion are induced by task demands and are not a standard aspect of online lexical processing.
  • Carlsson, K., Petrovic, P., Skare, S., Petersson, K. M., & Ingvar, M. (2000). Tickling expectations: Neural processing in anticipation of a sensory stimulus. Journal of Cognitive Neuroscience, 12(4), 691-703. doi:10.1162/089892900562318.
  • Clahsen, H., Eisenbeiss, S., Hadler, M., & Sonnenstuhl, I. (2001). The mental representation of inflected words: An experimental study of adjectives and verbs in German. Language, 77(3), 510-534. doi:10.1353/lan.2001.0140.

    Abstract

    We investigate how morphological relationships between inflected word forms are represented in the mental lexicon, focusing on paradigmatic relations between regularly inflected word forms and relationships between different stem forms of the same lexeme. We present results from a series of psycholinguistic experiments investigating German adjectives (which are inflected for case, number, and gender) and the so-called strong verbs of German, which have different stem forms when inflected for person, number, tense, or mood. Evidence from three lexical-decision experiments indicates that regular affixes are stripped off from their stems for processing purposes. It will be shown that this holds for both unmarked and marked stem forms. Another set of experiments revealed priming effects between different paradigmatically related affixes and between different stem forms of the same lexeme. We will show that associative models of inflection do not capture these findings, and we explain our results in terms of combinatorial models of inflection in which regular affixes are represented in inflectional paradigms and stem variants are represented in structured lexical entries. We will also argue that the morphosyntactic features of stems and affixes form abstract underspecified entries. The experimental results indicate that the human language processor makes use of these representations.

  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous; there are no reliable cues that tell the listener where one word ends and the next begins. Segmenting spoken language into separate words is therefore not unproblematic even for adult listeners, but for a child who does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of the second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life - particularly during its second half - speech perception develops from a general phonetic discrimination ability into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that, long before they can say even a single word, children are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words that were first presented in isolation when these occur in a continuous speech context. The everyday language input to a child of this age does not make this easy in some respects, for example because most words do not occur in isolation. Yet the child is also offered some support, among other things because the range of words used is restricted.
  • Cutler, A. (2001). Listening to a second language through the ears of a first. Interpreting, 5, 1-23.
  • Cutler, A., & Van Donselaar, W. (2001). Voornaam is not a homophone: Lexical prosody and lexical access in Dutch. Language and Speech, 44, 171-195. doi:10.1177/00238309010440020301.

    Abstract

    Four experiments examined Dutch listeners' use of suprasegmental information in spoken-word recognition. Isolated syllables excised from minimal stress pairs such as VOORnaam/voorNAAM could be reliably assigned to their source words. In lexical decision, no priming was observed from one member of minimal stress pairs to the other, suggesting that the pairs' segmental ambiguity was removed by suprasegmental information. Words embedded in nonsense strings were harder to detect if the nonsense string itself formed the beginning of a competing word, but a suprasegmental mismatch to the competing word significantly reduced this inhibition. The same nonsense strings facilitated recognition of the longer words of which they constituted the beginning, but again the facilitation was significantly reduced by suprasegmental mismatch. Together these results indicate that Dutch listeners effectively exploit suprasegmental cues in recognizing spoken words. Nonetheless, suprasegmental mismatch appears to be somewhat less effective in constraining activation than segmental mismatch.
  • Damian, M. F., Vigliocco, G., & Levelt, W. J. M. (2001). Effects of semantic context in the naming of pictures and words. Cognition, 81, B77-B86. doi:10.1016/S0010-0277(01)00135-4.

    Abstract

    Two experiments investigated whether lexical retrieval for speaking can be characterized as a competitive process by assessing the effects of semantic context on picture and word naming in German. In Experiment 1 we demonstrated that pictures are named slower in the context of same-category items than in the context of items from various semantic categories, replicating findings by Kroll and Stewart (Journal of Memory and Language, 33 (1994) 149). In Experiment 2 we used words instead of pictures. Participants either named the words in the context of same- or different-category items, or produced the words together with their corresponding determiner. While in the former condition words were named faster in the context of same-category items than of different-category items, the opposite pattern was obtained for the latter condition. These findings confirm the claim that the interfering effect of semantic context reflects competition in the retrieval of lexical entries in speaking.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [n] is mispronounced as [ŋ], the [ŋ] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Dobel, C., Pulvermüller, F., Härle, M., Cohen, R., Köbbel, P., Schönle, P. W., & Rockstroh, B. (2001). Syntactic and semantic processing in the healthy and aphasic human brain. Experimental Brain Research, 140(1), 77-85. doi:10.1007/s002210100794.

    Abstract

    A syntactic and a semantic task were performed by German-speaking healthy subjects and aphasics with lesions in the dominant left hemisphere. In both tasks, pictures of objects were presented that had to be classified by pressing buttons. The classification was into grammatical gender in the syntactic task (masculine or feminine gender?) and into semantic category in the semantic task (man-made or nature-made?). Behavioral data revealed a significant Group by Task interaction, with aphasics showing the most pronounced problems with syntax. Brain event-related potentials 300–600 ms following picture onset showed different task-dependent laterality patterns in the two groups. In controls, the syntax task induced a left-lateralized negative ERP, whereas the semantic task produced more symmetric responses over the hemispheres. The opposite was the case in the patients, where, paradoxically, stronger laterality of physiological brain responses emerged in the semantic task than in the syntactic task. We interpret these data based on neuropsycholinguistic models of word processing and current theories about the roles of the hemispheres in language recovery.
  • Drude, S. (2001). Entschlüsselung einer unbekannten Indianersprache: Ein Projekt zur Dokumentation der bedrohten brasilianischen Indianersprache Awetí. Fundiert: Das Wissenschaftsmagazin der Freien Universität Berlin, 2, 112-121. Retrieved from http://www.elfenbeinturm.net/archiv/2001/lust3.html.

    Abstract

    The Awetí are a small indigenous tribe in central Brazil that has so far had little contact with white people. As part of a Volkswagenstiftung programme for the documentation of endangered languages, our author will visit the Awetí again and, as the "younger brother of the chief", reports on his efforts to record the language of the Awetí for future generations.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardisation and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic as it is based on features of Russian phonology, rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Eggers, H., Klein, W., Rath, R., Rothkegel, A., Weber, H.-J., & Zimmermann, H. (1969). Die automatische Behandlung diskontinuierlicher Konstituenten im Deutschen. Muttersprache, 9/10, 260-266.
  • Enfield, N. J. (2001). ‘Lip-pointing’: A discussion of form and function with reference to data from Laos. Gesture, 1(2), 185-211. doi:10.1075/gest.1.2.06enf.

    Abstract

    ‘Lip-pointing’ is a widespread but little-documented form of deictic gesture, which may involve not just protruding one or both lips, but also raising the head, sticking out the chin, lifting the eyebrows, among other things. This paper discusses form and function of lip-pointing with reference to a set of examples collected on video in Laos. There are various parameters with respect to which the conventional form of a lip-pointing gesture may vary. There is also a range of ways in which lip-pointing gestures can be coordinated with other kinds of deictic gesture such as various forms of hand pointing. The attested coordinating/sequencing possibilities can be related to specific functional properties of lip-pointing among Lao speakers, particularly in the context of other forms of deictic gesture, which have different functional properties. It is argued that the ‘vector’ of lip-pointing is in fact defined by gaze, and that the lip-pointing action itself (like other kinds of ‘pointing’ involving the head area) is a ‘gaze-switch’, i.e. it indicates that the speaker is now pointing out something with his or her gaze. Finally, I consider the position of lip-pointing in the broader deictic gesture system of Lao speakers, firstly as a ‘lower register’ form, and secondly as a form of deictic gesture which may contrast with forms of hand pointing.
  • Enfield, N. J. (2001). Remarks on John Haiman, 1999. ‘Auxiliation in Khmer: the case of baan.’ Studies in Language 23:1. Studies in Language, 25(1), 115-124. doi:10.1075/sl.25.1.05enf.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Fernald, A., Swingley, D., & Pinto, J. P. (2001). When half a word is enough: infants can recognize spoken words using partial phonetic information. Child Development, 72, 1003-1015. doi:10.1111/1467-8624.00331.

    Abstract

    Adults process speech incrementally, rapidly identifying spoken words on the basis of initial phonetic information sufficient to distinguish them from alternatives. In this study, infants in the second year also made use of word-initial information to understand fluent speech. The time course of comprehension was examined by tracking infants' eye movements as they looked at pictures in response to familiar spoken words, presented both as whole words in intact form and as partial words in which only the first 300 ms of the word was heard. In Experiment 1, 21-month-old infants (N = 32) recognized partial words as quickly and reliably as they recognized whole words; in Experiment 2, these findings were replicated with 18-month-old infants (N = 32). Combining the data from both experiments, efficiency in spoken word recognition was examined in relation to level of lexical development. Infants with more than 100 words in their productive vocabulary were more accurate in identifying familiar words than were infants with less than 60 words. Grouped by response speed, infants with faster mean reaction times were more accurate in word recognition and also had larger productive vocabularies than infants with slower response latencies. These results show that infants in the second year are capable of incremental speech processing even before entering the vocabulary spurt, and that lexical growth is associated with increased speed and efficiency in understanding spoken language.
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
  • Fransson, P., Merboldt, K.-D., Ingvar, M., Petersson, K. M., & Frahm, J. (2001). Functional MRI with reduced susceptibility artifact: High-resolution mapping of episodic memory encoding. Neuroreport, 12, 1415-1420.

    Abstract

    Visual episodic memory encoding was investigated using echoplanar magnetic resonance imaging at 2.0 × 2.0 mm² resolution and 1.0 mm section thickness, which allows for functional mapping of hippocampal, parahippocampal, and ventral occipital regions with reduced magnetic susceptibility artifact. The memory task was based on 54 image pairs each consisting of a complex visual scene and the face of one of six different photographers. A second group of subjects viewed the same set of images without memory instruction as well as a reversing checkerboard. Apart from visual activation in occipital cortical areas, episodic memory encoding revealed consistent activation in the parahippocampal gyrus but not in the hippocampus proper. This finding was most prominently evidenced in sagittal maps covering the right hippocampal formation. Mean activated volumes were 432±293 µl and 259±179 µl for intentional memory encoding and non-instructed viewing, respectively. In contrast, the checkerboard paradigm elicited pure visual activation without parahippocampal involvement.
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the 'express-train' (5) and the 'entangled-bank' (6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Indefrey, P., Brown, C. M., Hellwig, F. M., Amunts, K., Herzog, H., Seitz, R. J., & Hagoort, P. (2001). A neural correlate of syntactic encoding during speech production. Proceedings of the National Academy of Sciences of the United States of America, 98, 5933-5936. doi:10.1073/pnas.101118098.

    Abstract

    Spoken language is one of the most compact and structured ways to convey information. The linguistic ability to structure individual words into larger sentence units permits speakers to express a nearly unlimited range of meanings. This ability is rooted in speakers’ knowledge of syntax and in the corresponding process of syntactic encoding. Syntactic encoding is highly automatized, operates largely outside of conscious awareness, and overlaps closely in time with several other processes of language production. With the use of positron emission tomography we investigated the cortical activations during spoken language production that are related to the syntactic encoding process. In the paradigm of restrictive scene description, utterances varying in complexity of syntactic encoding were elicited. Results provided evidence that the left Rolandic operculum, caudally adjacent to Broca’s area, is involved in both sentence-level and local (phrase-level) syntactic encoding during speaking.
  • Indefrey, P., Hagoort, P., Herzog, H., Seitz, R. J., & Brown, C. M. (2001). Syntactic processing in left prefrontal cortex is independent of lexical meaning. Neuroimage, 14, 546-555. doi:10.1006/nimg.2001.0867.

    Abstract

    In language comprehension a syntactic representation is built up even when the input is semantically uninterpretable. We report data on brain activation during syntactic processing, from an experiment on the detection of grammatical errors in meaningless sentences. The experimental paradigm was such that the syntactic processing was distinguished from other cognitive and linguistic functions. The data reveal that in syntactic error detection an area of the left dorsolateral prefrontal cortex, adjacent to Broca’s area, is specifically involved in the syntactic processing aspects, whereas other prefrontal areas subserve general error detection processes.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Jordan, F., & Gray, R. D. (2001). Comment on Terrell, Kelly and Rainbird. Current Anthropology, 42(1), 114-115.
  • Kempen, G., & Boon van Ostade, A. (1969). Een typologie van ideaalbeelden van Europese jeugdigen door middel van de iteratieve clusteranalyse. Nederlands Tijdschrift voor de Psychologie, 24, 46-60.
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G., Hermans, B., Klinkum, A., Brand, M., & Verhaaren, F. (1969). The word-frequency effect and incongruity perception: Methodological artifacts? Perception and Psychophysics, 5(3), 161-162. doi:10.3758/BF03209549.

    Abstract

    Two experimental results often reported in support of perceptual interpretations concerning the influence of set on perception are critically examined: (a) the relation between word frequency and recognition threshold, and (b) the so-called compromise reactions between set and stimulus. After elimination of certain methodological artifacts (e.g., introduction of a temporal forced-choice method instead of the ascending-limits method), both phenomena disappear; the influence of set on perception appears to be wholly a matter of response bias.
  • Klein, W. (2001). Ein Gemeinwesen, in dem das Volk herrscht, darf nicht von Gesetzen beherrscht werden, die das Volk nicht versteht. Rechtshistorisches Journal, 20, 621-628.
  • Klein, W. (2000). An analysis of the German Perfekt. Language, 76, 358-382.

    Abstract

    The German Perfekt has two quite different temporal readings, as illustrated by the two possible continuations of the sentence Peter hat gearbeitet in (i) and (ii), respectively: (i) Peter hat gearbeitet und ist müde. 'Peter has worked and is tired.' (ii) Peter hat gearbeitet und wollte nicht gestört werden. 'Peter has worked and wanted not to be disturbed.' The first reading essentially corresponds to the English present perfect; the second can take a temporal adverbial with past time reference ('yesterday at five', 'when the phone rang', and so on), and an English translation would require a past tense ('Peter worked/was working'). This article shows that the Perfekt has a uniform temporal meaning that results systematically from the interaction of its three components (finiteness marking, auxiliary, and past participle) and that the two readings are the consequence of a structural ambiguity. This analysis also predicts the properties of other participle constructions, in particular the passive in German.
  • Klein, W., Li, P., & Hendriks, H. (2000). Aspect and assertion in Mandarin Chinese. Natural Language & Linguistic Theory, 18, 723-770. doi:10.1023/A:1006411825993.

    Abstract

    Chinese has a number of particles such as le, guo, zai and zhe that add a particular aspectual value to the verb to which they are attached. There have been many characterisations of this value in the literature. In this paper, we review several existing influential accounts of these particles, including those in Li and Thompson (1981), Smith (1991), and Mangione and Li (1993). We argue that all these characterisations are intuitively plausible, but none of them is precise. We propose that these particles serve to mark which part of the sentence's descriptive content is asserted, and that their aspectual value is a consequence of this function. We provide a simple and precise definition of the meanings of le, guo, zai and zhe in terms of the relationship between topic time and time of situation, and show the consequences of their interaction with different verb expressions within this new framework of interpretation.
  • Klein, W. (2000). Fatale Traditionen. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (120), 11-40.
  • Klein, W., & Berliner Arbeitsgruppe (2000). Sprache des Rechts: Vermitteln, Verstehen, Verwechseln. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (118), 7-33.
  • Klein, W. (2000). Was uns die Sprache des Rechts über die Sprache sagt. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (118), 115-149.
  • Knösche, T. R., & Bastiaansen, M. C. M. (2001). Does the Hilbert transform improve accuracy and time resolution of ERD/ERS? Biomedizinische Technik, 46(2), 106-108.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Vargha-Khadem, F., & Monaco, A. P. (2001). A forkhead-domain gene is mutated in a severe speech and language disorder [Letters to Nature]. Nature, 413, 519-523. doi:10.1038/35097076.

    Abstract

    Individuals affected with developmental disorders of speech and language have substantial difficulty acquiring expressive and/or receptive language in the absence of any profound sensory or neurological impairment and despite adequate intelligence and opportunity. Although studies of twins consistently indicate that a significant genetic component is involved, most families segregating speech and language deficits show complex patterns of inheritance, and a gene that predisposes individuals to such disorders has not been identified. We have studied a unique three-generation pedigree, KE, in which a severe speech and language disorder is transmitted as an autosomal-dominant monogenic trait. Our previous work mapped the locus responsible, SPCH1, to a 5.6-cM interval of region 7q31 on chromosome 7 (ref. 5). We also identified an unrelated individual, CS, in whom speech and language impairment is associated with a chromosomal translocation involving the SPCH1 interval. Here we show that the gene FOXP2, which encodes a putative transcription factor containing a polyglutamine tract and a forkhead DNA-binding domain, is directly disrupted by the translocation breakpoint in CS. In addition, we identify a point mutation in affected members of the KE family that alters an invariant amino-acid residue in the forkhead domain. Our findings suggest that FOXP2 is involved in the developmental process that culminates in speech and language.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Levy, E. R., Hodgson, S., Fox, M., Jeremiah, S., Povey, S., Jamison, D. C., Green, E. D., Vargha-Khadem, F., & Monaco, A. P. (2000). The SPCH1 region on human 7q31: Genomic characterization of the critical interval and localization of translocations associated with speech and language disorder. American Journal of Human Genetics, 67(2), 357-368. doi:10.1086/303011.

    Abstract

    The KE family is a large three-generation pedigree in which half the members are affected with a severe speech and language disorder that is transmitted as an autosomal dominant monogenic trait. In previously published work, we localized the gene responsible (SPCH1) to a 5.6-cM region of 7q31 between D7S2459 and D7S643. In the present study, we have employed bioinformatic analyses to assemble a detailed BAC-/PAC-based sequence map of this interval, containing 152 sequence tagged sites (STSs), 20 known genes, and >7.75 Mb of completed genomic sequence. We screened the affected chromosome 7 from the KE family with 120 of these STSs (average spacing <100 kb), but we did not detect any evidence of a microdeletion. Novel polymorphic markers were generated from the sequence and were used to further localize critical recombination breakpoints in the KE family. This allowed refinement of the SPCH1 interval to a region between new markers 013A and 330B, containing ∼6.1 Mb of completed sequence. In addition, we have studied two unrelated patients with a similar speech and language disorder, who have de novo translocations involving 7q31. Fluorescence in situ hybridization analyses with BACs/PACs from the sequence map localized the t(5;7)(q22;q31.2) breakpoint in the first patient (CS) to a single clone within the newly refined SPCH1 interval. This clone contains the CAGH44 gene, which encodes a brain-expressed protein containing a large polyglutamine stretch. However, we found that the t(2;7)(p23;q31.3) breakpoint in the second patient (BRD) resides within a BAC clone mapping >3.7 Mb distal to this, outside the current SPCH1 critical interval. Finally, we investigated the CAGH44 gene in affected individuals of the KE family, but we found no mutations in the currently known coding sequence. These studies represent further steps toward the isolation of the first gene to be implicated in the development of speech and language.
  • Ledberg, A., Fransson, P., Larsson, J., & Petersson, K. M. (2001). A 4D approach to the analysis of functional brain images: Application to fMRI data. Human Brain Mapping, 13, 185-198. doi:10.1002/hbm.1032.

    Abstract

    This paper presents a new approach to functional magnetic resonance imaging (FMRI) data analysis. The main difference lies in the view of what comprises an observation. Here we treat the data from one scanning session (comprising t volumes, say) as one observation. This is contrary to the conventional way of looking at the data where each session is treated as t different observations. Thus instead of viewing the v voxels comprising the 3D volume of the brain as the variables, we suggest the usage of the vt hypervoxels comprising the 4D volume of the brain-over-session as the variables. A linear model is fitted to the 4D volumes originating from different sessions. Parameter estimation and hypothesis testing in this model can be performed with standard techniques. The hypothesis testing generates 4D statistical images (SIs) to which any relevant test statistic can be applied. In this paper we describe two test statistics, one voxel based and one cluster based, that can be used to test a range of hypotheses. There are several benefits in treating the data from each session as one observation, two of which are: (i) the temporal characteristics of the signal can be investigated without an explicit model for the blood oxygenation level dependent (BOLD) contrast response function, and (ii) the observations (sessions) can be assumed to be independent and hence inference on the 4D SI can be made by nonparametric or Monte Carlo methods. The suggested 4D approach is applied to FMRI data and is shown to accurately detect the expected signal.
  • Levelt, W. J. M. (2000). Uit talloos veel miljoenen. Natuur & Techniek, 68(11), 90.
  • Levelt, W. J. M. (1969). R.N. Haber, Contemporary theory and research in visual perception [Book review]. Nederlands tijdschrift voor de psychologie, 24, 463-464.
  • Levelt, W. J. M. (1969). A re-analysis of some adjective/noun intersection data. Heymans Bulletins, HB-69-31EX.
  • Levelt, W. J. M. (2001). De vlieger die (onverwacht) wel opgaat. Natuur & Techniek, 69(6), 60.
  • Levelt, W. J. M. (2001). Defining dyslexia. Science, 292, 1300-1301.
  • Levelt, W. J. M. (2000). Dyslexie. Natuur & Techniek, 68(4), 64.
  • Levelt, W. J. M. (1969). E.J. Brière, A psycholinguistic study of phonological interference [Book review]. Lingua, 22, 119-120.
  • Levelt, W. J. M., Zwanenburg, W., & Ouweneel, G. R. E. (1969). Ambiguous surface structure and phonetic form in French. Heymans Bulletins, (HB-69-28EX).
  • Levelt, W. J. M. (1969). Hierarchical chunking in sentence processing. Heymans Bulletins, HB-69-31EX.
  • Levelt, W. J. M. (2000). Links en rechts: Waarom hebben we zo vaak problemen met die woorden? Natuur & Techniek, 68(7/8), 90.
  • Levelt, W. J. M. (1969). Psychological representations of syntactic structures. Heymans Bulletins, HB-69-36EX.
  • Levelt, W. J. M. (1969). R.M. Warren en R.P. Warren, Helmholtz on perception, its physiology and development [Book review]. Nederlands tijdschrift voor de psychologie, 24, 463-464.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (2000). The acquisition of syllable types. Language Acquisition, 8(3), 237-263. doi:10.1207/S15327817LA0803_2.

    Abstract

    In this article, we present an account of developmental data regarding the acquisition of syllable types. The data come from a longitudinal corpus of phonetically transcribed speech of 12 children acquiring Dutch as their first language. A developmental order of acquisition of syllable types was deduced by aligning the syllabified data on a Guttman scale. This order could be analyzed as following from an initial ranking and subsequent rerankings in the grammar of the structural constraints ONSET, NO-CODA, *COMPLEX-O, and *COMPLEX-C; some local conjunctions of these constraints; and a faithfulness constraint FAITH. The syllable type frequencies in the speech surrounding the language learner are also considered. An interesting correlation is found between the frequencies and the order of development of the different syllable types.
  • Levelt, W. J. M. (2000). The brain does not serve linguistic theory so easily [Commentary on target article by Grodzinsky]. Behavioral and Brain Sciences, 23(1), 40-41.
  • Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13464-13471. doi:10.1073/pnas.231459498.

    Abstract

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker’s focusing on a target concept and ending with the initiation of articulation. The initial stages of preparation are concerned with lexical selection, which is zooming in on the appropriate lexical item in the mental lexicon. The following stages concern form encoding, i.e., retrieving a word’s morphemic phonological codes, syllabifying the word, and accessing the corresponding articulatory gestures. The theory is based on chronometric measurements of spoken word production, obtained, for instance, in picture-naming tasks. The theory is largely computationally implemented. It provides a handle on the analysis of multiword utterance production as well as a guide to the analysis and design of neuroimaging studies of spoken utterance production.
  • Levelt, W. J. M., & Ouweneel, G. R. E. (1969). The perception of French sentences with a surface ambiguity. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 24, 245-248.
  • Levelt, W. J. M. (1969). The perception of syntactic structure. Heymans Bulletins, HB-69-30EX.
  • Levelt, W. J. M. (1969). The scaling of syntactic relatedness: A new method in psycholinguistic research. Psychonomic Science, 17(6), 351-352.
  • Levelt, W. J. M. (2001). Woorden ophalen. Natuur en Techniek, 69(10), 74.
  • Levelt, W. J. M., & Meyer, A. S. (2000). Word for word: Multiple lexical access in speech production. European Journal of Cognitive Psychology, 12(4), 433-452. doi:10.1080/095414400750050178.

    Abstract

    It is quite normal for us to produce one or two million word tokens every year. Speaking is a dear occupation and producing words is at the core of it. Still, producing even a single word is a highly complex affair. Recently, Levelt, Roelofs, and Meyer (1999) reviewed their theory of lexical access in speech production, which dissects the word-producing mechanism as a staged application of various dedicated operations. The present paper begins by presenting a bird's-eye view of this mechanism. We then square the complexity by asking how speakers control multiple access in generating simple utterances such as a table and a chair. In particular, we address two issues. The first one concerns dependency: Do temporally contiguous access procedures interact in any way, or do they run in modular fashion? The second issue concerns temporal alignment: How much temporal overlap of processing does the system tolerate in accessing multiple content words, such as table and chair? Results from picture-word interference and eye tracking experiments provide evidence for restricted cases of dependency as well as for constraints on the temporal alignment of access procedures.
  • Levinson, S. C. (2000). Yélî Dnye and the theory of basic color terms. Journal of Linguistic Anthropology, 10(1), 3-55. doi:10.1525/jlin.2000.10.1.3.

    Abstract

    The theory of basic color terms was a crucial factor in the demise of linguistic relativity. The theory is now once again under scrutiny and fundamental revision. This article details a case study that undermines one of the central claims of the classical theory, namely that languages universally treat color as a unitary domain, to be exhaustively named. Taken together with other cases, the study suggests that a number of languages have only an incipient color terminology, raising doubts about the linguistic universality of such terminology.
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni) or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • Meyer, A. S., & Levelt, W. J. M. (2000). Merging speech perception and production [Comment on Norris, McQueen and Cutler]. Behavioral and Brain Sciences, 23(3), 339-340. doi:10.1017/S0140525X00373241.

    Abstract

    A comparison of Merge, a model of comprehension, and WEAVER, a model of production, raises five issues: (1) merging models of comprehension and production necessarily creates feedback; (2) neither model is a comprehensive account of word processing; (3) the models are incomplete in different ways; (4) the models differ in their handling of competition; (5) as opposed to WEAVER, Merge is a model of metalinguistic behavior.
  • Meyer, A. S., & Van der Meulen, F. (2000). Phonological priming effects on speech onset latencies and viewing times in object naming. Psychonomic Bulletin & Review, 7, 314-319.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Feedback on feedback on feedback: It’s feedforward. (Response to commentators). Behavioral and Brain Sciences, 23, 352-370.

    Abstract

    The central thesis of the target article was that feedback is never necessary in spoken word recognition. The commentaries present no new data and no new theoretical arguments which lead us to revise this position. In this response we begin by clarifying some terminological issues which have led to a number of significant misunderstandings. We provide some new arguments to support our case that the feedforward model Merge is indeed more parsimonious than the interactive alternatives, and that it provides a more convincing account of the data than alternative models. Finally, we extend the arguments to deal with new issues raised by the commentators such as infant speech perception and neural architecture.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences, 23, 299-325.

    Abstract

    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
  • Norris, D., McQueen, J. M., Cutler, A., Butterfield, S., & Kearns, R. (2001). Language-universal constraints on speech segmentation. Language and Cognitive Processes, 16, 637-660. doi:10.1080/01690960143000119.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and any likely location of a word boundary, as cued in the speech signal. The experiments examined cases where the residue was either a CVC syllable with a schwa, or a CV syllable with a lax vowel. Although neither of these syllable contexts is a possible lexical word in English, word-spotting in both contexts was easier than in a context consisting of a single consonant. Two control lexical-decision experiments showed that the word-spotting results reflected the relative segmentation difficulty of the words in different contexts. The PWC appears to be language-universal rather than language-specific.
  • Nyberg, L., Petersson, K. M., Nilsson, L.-G., Sandblom, J., Åberg, C., & Ingvar, M. (2001). Reactivation of motor brain areas during explicit memory for actions. Neuroimage, 14, 521-528. doi:10.1006/nimg.2001.0801.

    Abstract

    Recent functional brain imaging studies have shown that sensory-specific brain regions that are activated during perception/encoding of sensory-specific information are reactivated during memory retrieval of the same information. Here we used PET to examine whether verbal retrieval of action phrases is associated with reactivation of motor brain regions if the actions were overtly or covertly performed during encoding. Compared to a verbal condition, encoding by means of overt as well as covert activity was associated with differential activity in regions in contralateral somatosensory and motor cortex. Several of these regions were reactivated during retrieval. Common to both the overt and covert conditions was reactivation of regions in left ventral motor cortex and left inferior parietal cortex. A direct comparison of the overt and covert activity conditions showed that activation and reactivation of left dorsal parietal cortex and right cerebellum was specific to the overt condition. These results support the reactivation hypothesis by showing that verbal-explicit memory of actions involves areas that are engaged during overt and covert motor activity.
  • Petersson, K. M., Reis, A., & Ingvar, M. (2001). Cognitive processing in literate and illiterate subjects: A review of some recent behavioral and functional neuroimaging data. Scandinavian Journal of Psychology, 42, 251-267. doi:10.1111/1467-9450.00235.

    Abstract

    The study of illiterate subjects, who for specific socio-cultural reasons did not have the opportunity to acquire basic reading and writing skills, represents one approach to studying the interaction between neurobiological and cultural factors in cognitive development and the functional organization of the human brain. In addition, naturally occurring illiteracy may serve as a model for studying the influence of alphabetic orthography on auditory-verbal language. In this paper we review some recent behavioral and functional neuroimaging data indicating that learning an alphabetic written language modulates the auditory-verbal language system in a non-trivial way, providing support for the hypothesis that the functional architecture of the brain is modulated by literacy. We also indicate that the effects of literacy and formal schooling are not limited to language-related skills but appear to affect other cognitive domains as well. In particular, we indicate that formal schooling influences 2D but not 3D visual naming skills. We also point to the importance of using ecologically relevant tasks when comparing literate and illiterate subjects, and demonstrate the applicability of a network approach in elucidating differences in the functional organization of the brain between groups. The strength of such an approach is the ability to study patterns of interactions between functionally specialized brain regions and the possibility of comparing such patterns of brain interactions between groups or functional states. This complements the more commonly used activation approach to functional neuroimaging data, which characterizes functionally specialized regions, and provides important data characterizing the functional interactions between these regions.
  • Petersson, K. M., Reis, A., Askelöf, S., Castro-Caldas, A., & Ingvar, M. (2000). Language processing modulated by literacy: A network analysis of verbal repetition in literate and illiterate subjects. Journal of Cognitive Neuroscience, 12(3), 364-382. doi:10.1162/089892900562147.
  • Petersson, K. M., Sandblom, J., Gisselgard, J., & Ingvar, M. (2001). Learning related modulation of functional retrieval networks in man. Scandinavian Journal of Psychology, 42, 197-216. doi:10.1111/1467-9450.00231.
  • Petrovic, P., Petersson, K. M., Ghatan, P., Stone-Elander, S., & Ingvar, M. (2000). Pain related cerebral activation is altered by a distracting cognitive task. Pain, 85, 19-30.

    Abstract

    It has previously been suggested that the activity in sensory regions of the brain can be modulated by attentional mechanisms during parallel cognitive processing. To investigate whether such attention-related modulations are present in the processing of pain, the regional cerebral blood flow was measured using [15O]butanol and positron emission tomography in conditions involving both pain and parallel cognitive demands. The painful stimulus consisted of the standard cold pressor test and the cognitive task was a computerised perceptual maze test. The activations during the maze test reproduced findings in previous studies of the same cognitive task. The cold pressor test evoked significant activity in the contralateral S1, and bilaterally in the somatosensory association areas (including S2), the ACC and the mid-insula. The activity in the somatosensory association areas and the periaqueductal gray/midbrain was significantly modified, i.e. relatively decreased, when the subjects were also performing the maze task. The altered activity was accompanied by significantly lower ratings of pain during the cognitive task. In contrast, lateral orbitofrontal regions showed a relative increase of activity during pain combined with the maze task as compared to pain alone, which suggests the possible involvement of frontal cortex in the modulation of regions processing pain.
  • Poletiek, F. H. (2000). De beoordelaar dobbelt niet - denkt hij. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 55(5), 246-249.
  • Poletiek, F. H., & Berndsen, M. (2000). Hypothesis testing as risk behaviour with regard to beliefs. Journal of Behavioral Decision Making, 13(1), 107-123. doi:10.1002/(SICI)1099-0771(200001/03)13:1<107:AID-BDM349>3.0.CO;2-P.

    Abstract

    In this paper hypothesis‐testing behaviour is compared to risk‐taking behaviour. It is proposed that choosing a suitable test for a given hypothesis requires making a preposterior analysis of two aspects of such a test: the probability of obtaining supporting evidence and the evidential value of this evidence. This consideration resembles the one a gambler makes when choosing among bets, each having a probability of winning and an amount to be won. A confirmatory testing strategy can be defined within this framework as a strategy directed at maximizing either the probability or the value of a confirming outcome. Previous theories on testing behaviour have focused on the human tendency to maximize the probability of a confirming outcome. In this paper, two experiments are presented in which participants tend to maximize the confirming value of the test outcome. Motivational factors enhance this tendency dependent on the context of the testing situation. Both this result and the framework are discussed in relation to other studies in the field of testing behaviour.
  • Quené, H., & Janse, E. (2001). Word perception in time-compressed speech [Abstract]. Journal of the Acoustical Society of America, 110, 2738.
  • Reis, A., Petersson, K. M., Castro-Caldas, A., & Ingvar, M. (2001). Formal schooling influences two- but not three-dimensional naming skills. Brain and Cognition, 47, 397-411. doi:10.1006/brcg.2001.1316.

    Abstract

    The modulatory influence of literacy on the cognitive system of the human brain has been indicated in behavioral, neuroanatomic, and functional neuroimaging studies. In this study we explored the functional consequences of formal education and the acquisition of an alphabetic written language on two- and three-dimensional visual naming. The results show that illiterate subjects perform significantly worse on immediate naming of two-dimensional representations of common everyday objects compared to literate subjects, both in terms of accuracy and reaction times. In contrast, there was no significant difference when the subjects named the corresponding real objects. The results suggest that formal education and learning to read and to write modulate the cognitive process involved in processing two- but not three-dimensional representations of common everyday objects. Both the results of the reaction time and the error pattern analyses can be interpreted as indicating that the major influence of literacy affects the visual system or the interaction between the visual and the language systems. We suggest that the visual system in a wide sense and/or the interface between the visual and the language system are differently formatted in literate and illiterate subjects. In other words, we hypothesize that the pattern of interactions in the functional–anatomical networks subserving visual naming, that is, the interactions within and between the visual and language processing networks, differs in literate and illiterate subjects.
  • Robinson, J. D., & Stivers, T. (2001). Achieving activity transitions in primary-care encounters: From history taking to physical examination. Human Communication Research, 27(2), 253-298. doi:10.1111/j.1468-2958.2001.tb00782.x.
  • Rowland, C. F., & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: what children do know? Journal of Child Language, 27(1), 157-181.

    Abstract

    The present paper reports an analysis of correct wh-question production and subject–auxiliary inversion errors in one child's early wh-question data (age 2;3.4 to 4;10.23). It is argued that two current movement rule accounts (DeVilliers, 1991; Valian, Lasser & Mandelbaum, 1992) cannot explain the patterning of early wh-questions. However, the data can be explained in terms of the child's knowledge of particular lexically-specific wh-word+auxiliary combinations, and the pattern of inversion and uninversion predicted from the relative frequencies of these combinations in the mother's speech. The results support the claim that correctly inverted wh-questions can be produced without access to a subject–auxiliary inversion rule and are consistent with the constructivist claim that a distributional learning mechanism that learns and reproduces lexically-specific formulae heard in the input can explain much of the early multi-word speech data. The implications of these results for movement rule-based and constructivist theories of grammatical development are discussed.
  • Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2000). A palimpsest memory based on an incremental Bayesian learning rule. Neurocomputing, 32(33), 987-994. doi:10.1016/S0925-2312(00)00270-8.

    Abstract

    Capacity-limited memory systems need to gradually forget old information in order to avoid catastrophic forgetting, where all stored information is lost. This can be achieved by allowing new information to overwrite old, as in the so-called palimpsest memory. This paper describes a new such learning rule employed in an attractor neural network. The network does not exhibit catastrophic forgetting, has a capacity dependent on the learning time constant, and exhibits recency effects in retrieval.
  • Sandberg, A., Lansner, A., & Petersson, K. M. (2001). Selective enhancement of recall through plasticity modulation in an autoassociative memory. Neurocomputing, 38(40), 867-873. doi:10.1016/S0925-2312(01)00363-0.

    Abstract

    The strength of a memory trace is modulated by a variety of factors such as arousal, attention, context, type of processing during encoding, salience and novelty of the experience. Some of these factors can be modeled as a variable plasticity level in the memory system, controlled by arousal or relevance-estimating systems. We demonstrate that a Bayesian confidence propagation neural network with its learning time constant modulated in this way exhibits enhanced recall of an item tagged as salient. Proactive and retroactive inhibition of other items is also demonstrated, as well as an inverted U-shaped response to overall plasticity.
  • Schiller, N. O., Greenhall, J. A., Shelton, J. R., & Caramazza, A. (2001). Serial order effects in spelling errors: Evidence from two dysgraphic patients. Neurocase, 7, 1-14. doi:10.1093/neucas/7.1.1.

    Abstract

    This study reports data from two dysgraphic patients, TH and PB, whose errors in spelling most often occurred in the final part of words. The probability of making an error increased monotonically towards the end of words. Long words were affected more than short words, and performance was similar across different output modalities (writing, typing and oral spelling). This error performance was found despite the fact that both patients showed normal ability to repeat the same words orally and to access their full spelling in tasks that minimized the involvement of working memory. This pattern of performance locates their deficit to the mechanism that keeps graphemic representations active for further processing, and shows that the functioning of this mechanism is not controlled or "refreshed" by phonological (or articulatory) processes. Although the overall performance pattern is most consistent with a deficit to the graphemic buffer, the strong tendency for errors to occur at the ends of words is unlike many classic "graphemic buffer patients" whose errors predominantly occur at word-medial positions. The contrasting patterns are discussed in terms of different types of impairment to the graphemic buffer.
  • Senft, G. (2001). [Review of the book Handbook of language and ethnic identity ed. by Joshua A. Fishman]. Linguistics, 39, 188-190. doi:10.1515/ling.2001.004.
  • Senft, G. (2001). [Review of the book Language Death by David Crystal]. Linguistics, 39, 815-822. doi:10.1515/ling.2001.032.
  • Senft, G. (2000). [Review of the book Language, identity, and marginality in Indonesia: The changing nature of ritual speech on the island of Sumba by Joel C. Kuipers]. Linguistics, 38, 435-441. doi:10.1515/ling.38.2.435.
  • Senft, G. (2001). [Review of the book Malinowski's Kiriwina: Fieldwork photography 1915-1918 by Michael W. Young]. Paideuma, 47, 260-263.
  • Senft, G. (2001). [Review of the CD Betel Nuts by Christopher Roberts (1996)]. Kulele, 3, 115-122.

    Abstract

    (TMCD 9602). Taipei: Trees Music & Art, 12-1, Lane 10, Sec. 2, Hsin Yi Rd., Taipei, TAIWAN. Distributed by Sony Music Entertainment (Taiwan) Ltd., 6th fl., No. 35, Lane 11, Kwang-Fu N. Rd., Taipei, TAIWAN. (CD accompanied by a full-color booklet.)
  • Senft, G. (2001). Frames of spatial reference in Kilivila. Studies in Language, 25(3), 521-555. doi:10.1075/sl.25.3.05sen.

    Abstract

    Members of the MPI for Psycholinguistics are researching the interrelationship between language, cognition and the conceptualization of space in various languages. Research results show that there are three frames of spatial reference, the absolute, the relative, and the intrinsic frame of reference. This study first presents results of this research in general and then discusses the results for Kilivila. Speakers of this Austronesian language prefer the intrinsic frame of reference for the location of objects with respect to each other in a given spatial configuration. But they prefer an absolute frame of reference system in referring to the spatial orientation of objects in a given spatial configuration. Moreover, the hypothesis is confirmed that languages seem to influence the choice and the kind of conceptual parameters their speakers use to solve non-verbal problems within the domain of space.
