Publications

  • Klein, W. (1975). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 5(18), 7-8.
  • Klein, W. (1986). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 16(62), 9-10.
  • Klein, W. (Ed.). (1995). Epoche [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (100).
  • Klein, W., & Jungbluth, K. (Eds.). (2002). Deixis [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 125.
  • Klein, W., & Jungbluth, K. (2002). Einleitung - Introduction. Zeitschrift für Literaturwissenschaft und Linguistik, 125, 5-9.
  • Klein, W. (1995). Literaturwissenschaft, Linguistik, LiLi. Zeitschrift für Literaturwissenschaft und Linguistik, (100), 1-10.
  • Klein, W. (1982). Pronoms personnels et formes d'acquisition. Encrages, 8/9, 42-46.
  • Klein, W. (Ed.). (1975). Sprache ausländischer Arbeiter [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (18).
  • Klein, W. (Ed.). (1986). Sprachverfall [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (62).
  • Klein, W. (1999). Wie sich das deutsche Perfekt zusammensetzt. Zeitschrift für Literaturwissenschaft und Linguistik, (113), 52-85.
  • Klein, W., & Dimroth, C. (Eds.). (2009). Worauf kann sich der Sprachunterricht stützen? [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 153.
  • Klein, W. (1975). Zur Sprache ausländischer Arbeiter: Syntaktische Analysen und Aspekte des kommunikativen Verhaltens. Zeitschrift für Literaturwissenschaft und Linguistik, 18, 78-121.
  • Klein, W. (Ed.). (1982). Zweitspracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (45).
  • Klein, W. (1986). Über Ansehen und Wirkung der deutschen Sprachwissenschaft heute. Linguistische Berichte, 100, 511-520.
  • Knösche, T. R., & Bastiaansen, M. C. M. (2002). On the time resolution of event-related desynchronization: A simulation study. Clinical Neurophysiology, 113(5), 754-763. doi:10.1016/S1388-2457(02)00055-X.

    Abstract

    Objectives: To investigate the time resolution of different methods for the computation of event-related desynchronization/synchronization (ERD/ERS), including one based on Hilbert transform. Methods: In order to better understand the time resolution of ERD/ERS, which is a function of factors such as the exact computation method, the frequency under study, the number of trials, and the sampling frequency, we simulated sudden changes in oscillation amplitude as well as very short and closely spaced events. Results: Hilbert-based ERD yields very similar results to ERD integrated over predefined time intervals (block ERD), if the block length is half the period length of the studied frequency. ERD predicts the onset of a change in oscillation amplitude with an error margin of only 10–30 ms. On the other hand, the time the ERD response needs to climb to its full height after a sudden change in oscillation amplitude is quite long, i.e. between 200 and 500 ms. With respect to sensitivity to short oscillatory events, the ratio between sampling frequency and electroencephalographic frequency band plays a major role. Conclusions: (1) The optimal time interval for the computation of block ERD is half a period of the frequency under investigation. (2) Due to the slow impulse response, amplitude effects in the ERD may in reality be caused by duration differences. (3) Although ERD based on the Hilbert transform does not yield any significant advantages over classical ERD in terms of time resolution, it has some important practical advantages.
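    A minimal, illustrative sketch (not the authors' code) of the two ERD computation schemes compared in this abstract, the Hilbert-envelope method and block averaging over fixed intervals; the sampling rate, frequency band, toy signal, and baseline window below are all assumptions.

```python
# Illustrative sketch (not the authors' code): event-related desynchronization
# (ERD) computed from band-passed single trials in two ways, via the Hilbert
# envelope and via block averaging over fixed intervals. Sampling rate,
# frequency band, toy signal, and baseline window are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                   # sampling frequency (Hz), assumed
t = np.arange(-1.0, 2.0, 1.0 / fs)           # 1 s baseline, 2 s post-stimulus
n_trials = 40
f0 = 10.0                                    # alpha frequency under study (Hz)

# Toy data: 10-Hz oscillations whose amplitude drops after t = 0, plus noise.
rng = np.random.default_rng(0)
amp = np.where(t < 0, 1.0, 0.4)
trials = amp * np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal((n_trials, t.size))

# Band-pass filter each trial around the frequency of interest (8-12 Hz).
b, a = butter(4, [8.0 / (fs / 2), 12.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, trials, axis=1)

# (1) Hilbert-based ERD: trial-averaged squared analytic amplitude.
power_hilbert = np.abs(hilbert(filtered, axis=1)) ** 2
avg_hilbert = power_hilbert.mean(axis=0)

# (2) Block ERD: average squared amplitude in consecutive blocks of half a
# period of the studied frequency (the optimal block length reported above).
block_len = int(round(fs / (2 * f0)))
n_blocks = t.size // block_len
sq = filtered[:, :n_blocks * block_len] ** 2
avg_block = sq.reshape(n_trials, n_blocks, block_len).mean(axis=(0, 2))

# Express power as percentage change relative to the pre-stimulus baseline.
baseline = avg_hilbert[t < -0.5].mean()
erd_hilbert = 100.0 * (avg_hilbert - baseline) / baseline
```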
  • Konopka, A. E., & Bock, K. (2009). Lexical or syntactic control of sentence formulation? Structural generalizations from idiom production. Cognitive Psychology, 58, 68-101. doi:10.1016/j.cogpsych.2008.05.002.

    Abstract

    To compare abstract structural and lexicalist accounts of syntactic processes in sentence formulation, we examined the effectiveness of nonidiomatic and idiomatic phrasal verbs in inducing structural generalizations. Three experiments made use of a syntactic priming paradigm in which participants recalled sentences they had read in rapid serial visual presentation. Prime and target sentences contained phrasal verbs with particles directly following the verb (pull off a sweatshirt) or following the direct object (pull a sweatshirt off). Idiomatic primes used verbs whose figurative meaning cannot be straightforwardly derived from the literal meaning of the main verb (e.g., pull off a robbery) and are commonly treated as stored lexical units. Particle placement in sentences was primed by both nonidiomatic and idiomatic verbs. Experiment 1 showed that the syntax of idiomatic and nonidiomatic phrasal verbs is amenable to priming, and Experiments 2 and 3 compared the priming patterns created by idiomatic and nonidiomatic primes. Despite differences in idiomaticity and structural flexibility, both types of phrasal verbs induced structural generalizations and differed little in their ability to do so. The findings are interpreted in terms of the role of abstract structural processes in language production.
  • Konopka, A. E., & Benjamin, A. (2009). Schematic knowledge changes what judgments of learning predict in a source memory task. Memory & Cognition, 37(1), 42-51. doi:10.3758/MC.37.1.42.

    Abstract

    Source monitoring can be influenced by information that is external to the study context, such as beliefs and general knowledge (Johnson, Hashtroudi, & Lindsay, 1993). We investigated the extent to which metamnemonic judgments predict memory for items and sources when schematic information about the sources is or is not provided at encoding. Participants made judgments of learning (JOLs) to statements presented by two speakers and were informed of the occupation of each speaker either before or after the encoding session. Replicating earlier work, prior knowledge decreased participants' tendency to erroneously attribute statements to schematically consistent but episodically incorrect speakers. The origin of this effect can be understood by examining the relationship between JOLs and performance: JOLs were equally predictive of item and source memory in the absence of prior knowledge, but were exclusively predictive of source memory when participants knew of the relationship between speakers and statements during study. Background knowledge determines the information that people solicit in service of metamnemonic judgments, suggesting that these judgments reflect control processes during encoding that reduce schematic errors.
  • Kooijman, V., Hagoort, P., & Cutler, A. (2009). Prosodic structure in early word segmentation: ERP evidence from Dutch ten-month-olds. Infancy, 14, 591-612. doi:10.1080/15250000903263957.

    Abstract

    Recognizing word boundaries in continuous speech requires detailed knowledge of the native language. In the first year of life, infants acquire considerable word segmentation abilities. Infants at this early stage in word segmentation rely to a large extent on the metrical pattern of their native language, at least in stress-based languages. In Dutch and English (both languages with a preferred trochaic stress pattern), segmentation of strong-weak words develops rapidly between 7 and 10 months of age. Nevertheless, trochaic languages contain not only strong-weak words but also words with a weak-strong stress pattern. In this article, we present electrophysiological evidence of the beginnings of weak-strong word segmentation in Dutch 10-month-olds. At this age, the ability to combine different cues for efficient word segmentation does not yet seem to be completely developed. We provide evidence that Dutch infants still largely rely on strong syllables, even for the segmentation of weak-strong words.
  • Kopecka, A. (2009). L'expression du déplacement en Français: L'interaction des facteurs sémantiques, aspectuels et pragmatiques dans la construction du sens spatial. Langages, 173, 54-75.

    Abstract

    The paper investigates the use of manner verbs (e.g. marcher 'to walk', courir 'to run') with so-called locative prepositions (e.g. dans 'in', sous 'under') in the descriptions of motion in French, as in Il a couru dans le bureau 'He ran in (to) the office', to explore the type of events such constructions express and the factors that influence their interpretation. Based on an extensive corpus survey, the study shows that, contrary to the general claim according to which such constructions express typically motion in some location, they are also frequently used to express change of location. The study discusses the interplay of various factors that contribute to the interpretation of these constructions, including semantic, aspectual and pragmatic factors.
  • Koten Jr., J. W., Wood, G., Hagoort, P., Goebel, R., Propping, P., Willmes, K., & Boomsma, D. I. (2009). Genetic contribution to variation in cognitive function: An fMRI study in twins. Science, 323(5922), 1737-1740. doi:10.1126/science.1167371.

    Abstract

    Little is known about the genetic contribution to individual differences in neural networks subserving cognitive function. In this functional magnetic resonance imaging (fMRI) twin study, we found a significant genetic influence on brain activation in neural networks supporting digit working memory tasks. Participants activating frontal-parietal networks responded faster than individuals relying more on language-related brain networks. There were genetic influences on brain activation in language-relevant brain circuits that were atypical for numerical working memory tasks as such. This suggests that differences in cognition might be related to brain activation patterns that differ qualitatively among individuals.
  • Küntay, A. C., & Slobin, D. I. (2002). Putting interaction back into child language: Examples from Turkish. Psychology of Language and Communication, 6(1), 14.

    Abstract

    As in the case of other non-English languages, the study of the acquisition of Turkish has mostly focused on aspects of grammatical morphology and syntax, largely neglecting the study of the effect of interactional factors on child morphosyntax. This paper reviews indications from past research that studying input and adult-child discourse can facilitate the study of the acquisition of morphosyntax in the Turkish language. It also summarizes some recent studies of Turkish child language on the relationship of child-directed speech to the early acquisition of morphosyntax, and on the pragmatic features of a certain kind of discourse form in child-directed speech called variation sets.
  • Kurt, S., Groszer, M., Fisher, S. E., & Ehret, G. (2009). Modified sound-evoked brainstem potentials in Foxp2 mutant mice. Brain Research, 1289, 30-36. doi:10.1016/j.brainres.2009.06.092.

    Abstract

    Heterozygous mutations of the human FOXP2 gene cause a developmental disorder involving impaired learning and production of fluent spoken language. Previous investigations of its aetiology have focused on disturbed function of neural circuits involved in motor control. However, Foxp2 expression has been found in the cochlea and auditory brain centers and deficits in auditory processing could contribute to difficulties in speech learning and production. Here, we recorded auditory brainstem responses (ABR) to assess two heterozygous mouse models carrying distinct Foxp2 point mutations matching those found in humans with FOXP2-related speech/language impairment. Mice which carry a Foxp2-S321X nonsense mutation, yielding reduced dosage of Foxp2 protein, did not show systematic ABR differences from wildtype littermates. Given that speech/language disorders are observed in heterozygous humans with similar nonsense mutations (FOXP2-R328X), our findings suggest that auditory processing deficits up to the midbrain level are not causative for FOXP2-related language impairments. Interestingly, however, mice harboring a Foxp2-R552H missense mutation displayed systematic alterations in ABR waves with longer latencies (significant for waves I, III, IV) and smaller amplitudes (significant for waves I, IV) suggesting that either the synchrony of synaptic transmission in the cochlea and in auditory brainstem centers is affected, or fewer auditory nerve fibers and fewer neurons in auditory brainstem centers are activated compared to wildtypes. Therefore, the R552H mutation uncovers possible roles for Foxp2 in the development and/or function of the auditory system. Since ABR audiometry is easily accessible in humans, our data call for systematic testing of auditory functions in humans with FOXP2 mutations.
  • Lai, V. T., Curran, T., & Menn, L. (2009). Comprehending conventional and novel metaphors: An ERP study. Brain Research, 1284, 145-155. doi:10.1016/j.brainres.2009.05.088.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2009). Reply to: "Can CBT substantially change grey matter volume in chronic fatigue syndrome" [Letter to the editor]. Brain, 132(6), e111. doi:10.1093/brain/awn208.
  • De Lange, F., Bleijenberg, G., Van der Meer, J. W. M., Hagoort, P., & Toni, I. (2009). Reply: Change in grey matter volume cannot be assumed to be due to cognitive behavioural therapy [Letter to the editor]. Brain, 132(7), e120. doi:10.1093/brain/awn359.
  • De Lange, F. P., Knoop, H., Bleijenberg, G., Van der Meer, J. W. M., Hagoort, P., & Toni, I. (2009). The experience of fatigue in the brain [Letter to the editor]. Psychological Medicine, 39, 523-524. doi:10.1017/S0033291708004844.
  • Lausberg, H., & Sloetjes, H. (2009). Coding gestural behavior with the NEUROGES-ELAN system. Behavior Research Methods, 41(3), 841-849. doi:10.3758/BRM.41.3.841.

    Abstract

    We present a coding system combined with an annotation tool for the analysis of gestural behavior. The NEUROGES coding system consists of three modules that progress from gesture kinetics to gesture function. Grounded on empirical neuropsychological and psychological studies, the theoretical assumption behind NEUROGES is that its main kinetic and functional movement categories are differentially associated with specific cognitive, emotional, and interactive functions. ELAN is a free, multimodal annotation tool for digital audio and video media. It supports multileveled transcription and complies with such standards as XML and Unicode. ELAN allows gesture categories to be stored with associated vocabularies that are reusable by means of template files. The combination of the NEUROGES coding system and the annotation tool ELAN creates an effective tool for empirical research on gestural behavior.
  • Lausberg, H., & Kita, S. (2002). Dissociation of right and left gesture spaces in split-brain patients. Cortex, 38(5), 883-886. doi:10.1016/S0010-9452(08)70062-5.

    Abstract

    The present study investigates hemispheric specialisation in the use of space in communicative gestures. For this purpose, we investigate split-brain patients in whom spontaneous and distinct right hand gestures can only be controlled by the left hemisphere and vice versa, the left hand only by the right hemisphere. On this anatomical basis, we can infer hemispheric specialisation from the performances of the right and left hands. In contrast to left hand dyspraxia in tasks that require language processing, split-brain patients utilise their left hands in a meaningful way in visuo-constructive tasks such as copying drawings or block-design. Therefore, we conjecture that split-brain patients are capable of using their left hands for the communication of the content of visuo-spatial animations via gestural demonstration. On this basis, we further examine the use of space in communicative gestures by the right and left hands. McNeill and Pedelty (1995) noted for the split-brain patient N.G. that her iconic right hand gestures were exclusively displayed in the right personal space. The present study investigates systematically if there is indication for neglect of the left personal space in right hand gestures in split-brain patients.
  • Levelt, W. J. M. (2002). Picture naming and word frequency: Comments on Alario, Costa and Caramazza, Language and Cognitive Processes, 17(3), 299-319. Language and Cognitive Processes, 17(6), 663-671. doi:10.1080/01690960143000443.

    Abstract

    This commentary on Alario et al. (2002) addresses two issues: (1) Different from what the authors suggest, there are no theories of production claiming the phonological word to be the upper bound of advance planning before the onset of articulation; (2) Their picture naming study of word frequency effects on speech onset is inconclusive by lack of a crucial control, viz., of object recognition latency. This is a perennial problem in picture naming studies of word frequency and age of acquisition effects.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (1999). A developmental grammar for syllable structure in the production of child language. Brain and Language, 68, 291-299.

    Abstract

    The order of acquisition of Dutch syllable types by first language learners is analyzed as following from an initial ranking and subsequent rerankings of constraints in an optimality theoretic grammar. Initially, structural constraints are all ranked above faithfulness constraints, leading to core syllable (CV) productions only. Subsequently, faithfulness gradually rises to the highest position in the ranking, allowing more and more marked syllable types to appear in production. Local conjunctions of structural constraints allow for a more detailed analysis.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1-38. doi:10.1017/S0140525X99001776.

    Abstract

    Preparing words in speech production is normally a fast and accurate process. We generate them two or three per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.
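    The staged, feedforward architecture described here can be illustrated with a toy localist spreading-activation network; the sketch below is an assumption-laden illustration rather than WEAVER++ itself, and the node names, connection weights, decay rate, and number of steps are invented.

```python
# Toy sketch (not WEAVER++ itself) of feedforward activation spreading through
# a localist lexical network: concept -> lemma -> phonological segments.
# Node names, weights, decay rate, and selection rule are all invented here.
from collections import defaultdict

edges = {                      # directed, feedforward links with weights
    "CAT(concept)": [("cat(lemma)", 1.0)],
    "cat(lemma)":   [("/k/", 0.5), ("/ae/", 0.5), ("/t/", 0.5)],
}

def spread(initial, steps=5, decay=0.4, rate=0.6):
    """Spread activation through the network for a fixed number of steps."""
    act = defaultdict(float, initial)
    for _ in range(steps):
        new = defaultdict(float)
        for node, a in act.items():
            new[node] += (1.0 - decay) * a            # decaying persistence
            for target, w in edges.get(node, []):
                new[target] += rate * w * a           # feedforward spread
        act = new
    return dict(act)

activation = spread({"CAT(concept)": 1.0})
selected = max(activation, key=activation.get)        # highest-activation node
print(selected, activation)
```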
  • Levelt, W. J. M. (1999). Models of word production. Trends in Cognitive Sciences, 3, 223-232.

    Abstract

    Research on spoken word production has been approached from two angles. In one research tradition, the analysis of spontaneous or induced speech errors led to models that can account for speech error distributions. In another tradition, the measurement of picture naming latencies led to chronometric models accounting for distributions of reaction times in word production. Both kinds of models are, however, dealing with the same underlying processes: (1) the speaker’s selection of a word that is semantically and syntactically appropriate; (2) the retrieval of the word’s phonological properties; (3) the rapid syllabification of the word in context; and (4) the preparation of the corresponding articulatory gestures. Models of both traditions explain these processes in terms of activation spreading through a localist, symbolic network. By and large, they share the main levels of representation: conceptual/semantic, syntactic, phonological and phonetic. They differ in various details, such as the amount of cascading and feedback in the network. These research traditions have begun to merge in recent years, leading to highly constructive experimentation. Currently, they are like two similar knives honing each other. A single pair of scissors is in the making.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). Multiple perspectives on lexical access [Authors' response]. Behavioral and Brain Sciences, 22, 61-72. doi:10.1017/S0140525X99451775.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M. (1995). Hoezo 'neuro'? Hoezo 'linguïstisch'? Intermediair, 31(46), 32-37.
  • Levelt, W. J. M. (1995). The ability to speak: From intentions to spoken words. European Review, 3(1), 13-23. doi:10.1017/S1062798700001290.

    Abstract

    In recent decades, psychologists have become increasingly interested in our ability to speak. This paper sketches the present theoretical perspective on this most complex skill of homo sapiens. The generation of fluent speech is based on the interaction of various processing components. These mechanisms are highly specialized, dedicated to performing specific subroutines, such as retrieving appropriate words, generating morpho-syntactic structure, computing the phonological target shape of syllables, words, phrases and whole utterances, and creating and executing articulatory programmes. As in any complex skill, there is a self-monitoring mechanism that checks the output. These component processes are targets of increasingly sophisticated experimental research, of which this paper presents a few salient examples.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: (At) what time do you close? A: “(At) five o'clock.” Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This “correspondence effect” may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levinson, S. C., Kita, S., Haun, D. B. M., & Rasch, B. H. (2002). Returning the tables: Language affects spatial reasoning. Cognition, 84(2), 155-188. doi:10.1016/S0010-0277(02)00045-8.

    Abstract

    Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking – it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.
  • Levinson, S. C. (2002). Time for a linguistic anthropology of time. Current Anthropology, 43(4), S122-S123. doi:10.1086/342214.
  • Levinson, S. C. (1999). Maxim. Journal of Linguistic Anthropology, 9, 144-147. doi:10.1525/jlin.1999.9.1-2.144.
  • Levinson, S. C., & Burenhult, N. (2009). Semplates: A new concept in lexical semantics? Language, 85, 153-174. doi:10.1353/lan.0.0090.

    Abstract

    This short report draws attention to an interesting kind of configuration in the lexicon that seems to have escaped theoretical or systematic descriptive attention. These configurations, which we dub SEMPLATES, consist of an abstract structure or template, which is recurrently instantiated in a number of lexical sets, typically of different form classes. A number of examples from different language families are adduced, and generalizations made about the nature of semplates, which are contrasted to other, perhaps similar, phenomena.
  • Liljeström, M., Hultén, A., Parkkonen, L., & Salmelin, R. (2009). Comparing MEG and fMRI views to naming actions and objects. Human Brain Mapping, 30, 1845-1856. doi:10.1002/hbm.20785.

    Abstract

    Most neuroimaging studies are performed using one imaging method only, either functional magnetic resonance imaging (fMRI), electroencephalography (EEG), or magnetoencephalography (MEG). Information on both location and timing has been sought by recording fMRI and EEG simultaneously, or MEG and fMRI in separate sessions. Such approaches assume similar active areas whether detected via hemodynamic or electrophysiological signatures. Direct comparisons, after independent analysis of data from each imaging modality, have been conducted primarily on low-level sensory processing. Here, we report MEG (timing and location) and fMRI (location) results in 11 subjects when they named pictures that depicted an action or an object. The experimental design was exactly the same for the two imaging modalities. The MEG data were analyzed with two standard approaches: a set of equivalent current dipoles and a distributed minimum norm estimate. The fMRI blood-oxygen-level-dependent (BOLD) data were subjected to the usual random-effect contrast analysis. At the group level, MEG and fMRI data showed fairly good convergence, with both overall activation patterns and task effects localizing to comparable cortical regions. There were some systematic discrepancies, however, and the correspondence was less compelling in the individual subjects. The present analysis should be helpful in reconciling results of fMRI and MEG studies on high-level cognitive functions.
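    As a rough illustration of the second MEG approach mentioned above, a distributed minimum norm estimate amounts to applying a regularized pseudoinverse of the lead-field matrix to the sensor data; the sketch below uses random placeholder matrices and an assumed regularization constant, not the study's actual forward model or analysis pipeline.

```python
# Minimal sketch of a distributed minimum norm estimate (MNE): source
# amplitudes are obtained from a regularized pseudoinverse of the lead-field
# matrix. Lead field, data, and regularization constant are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources, n_times = 102, 500, 200
L = rng.standard_normal((n_sensors, n_sources))   # lead-field matrix (placeholder)
M = rng.standard_normal((n_sensors, n_times))     # measured MEG data (placeholder)

lam = 0.1                                         # regularization weight, assumed
gram = L @ L.T
# Minimum-norm solution: x_hat = L^T (L L^T + lam * c * I)^-1 M,
# with the identity scaled by the average sensor-space power.
A = gram + lam * np.trace(gram) / n_sensors * np.eye(n_sensors)
source_est = L.T @ np.linalg.solve(A, M)          # n_sources x n_times time courses
```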
  • Liszkowski, U., Schäfer, M., Carpenter, M., & Tomasello, M. (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20, 654-660.

    Abstract

    One of the defining features of human language is displacement, the ability to make reference to absent entities. Here we show that prelinguistic, 12-month-old infants already can use a nonverbal pointing gesture to make reference to absent entities. We also show that chimpanzees—who can point for things they want humans to give them—do not point to refer to absent entities in the same way. These results demonstrate that the ability to communicate about absent but mutually known entities depends not on language, but rather on deeper social-cognitive skills that make acts of linguistic reference possible in the first place. These nonlinguistic skills for displaced reference emerged apparently only after humans' divergence from great apes some 6 million years ago.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
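    The decomposition step described here, splitting a grand-average current source density distribution (sensors by time) into spatial factors, can be sketched with an ordinary principal component analysis via the singular value decomposition; the CSD matrix below is a random placeholder rather than the study's MEG data.

```python
# Sketch of the PCA idea described above: decompose a grand-average current
# source density (CSD) matrix (sensors x time) into spatial subcomponents.
# The CSD matrix here is a random placeholder, not real MEG data.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_times = 148, 600
csd = rng.standard_normal((n_sensors, n_times))   # placeholder grand-average CSD

csd_centered = csd - csd.mean(axis=1, keepdims=True)
# SVD: columns of U are spatial factors, rows of Vt their time courses.
U, s, Vt = np.linalg.svd(csd_centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)               # variance explained per factor

spatial_factor_1 = U[:, 0]        # e.g. a left-temporal topography in the study
time_course_1 = s[0] * Vt[0]      # its activation over time
```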
  • Majid, A. (2002). Frames of reference and language concepts. Trends in Cognitive Sciences, 6(12), 503-504. doi:10.1016/S1364-6613(02)02024-7.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2002). The influence of animacy on relative clause processing. Journal of Memory and Language, 47(1), 50-68. doi:10.1006/jmla.2001.2837.

    Abstract

    In previous research it has been shown that subject relative clauses are easier to process than object relative clauses. Several theories have been proposed that explain the difference on the basis of different theoretical perspectives. However, previous research tested relative clauses only with animate protagonists. In a corpus study of Dutch and German newspaper texts, we show that animacy is an important determinant of the distribution of subject and object relative clauses. In two experiments in Dutch, in which the animacy of the object of the relative clause is varied, no difference in reading time is obtained between subject and object relative clauses when the object is inanimate. The experiments show that animacy influences the processing difficulty of relative clauses. These results can only be accounted for by current major theories of relative clause processing when additional assumptions are introduced, and at the same time show that the possibility of semantically driven analysis can be considered as a serious alternative.
  • Marlow, A. J., Fisher, S. E., Richardson, A. J., Francks, C., Talcott, J. B., Monaco, A. P., Stein, J. F., & Cardon, L. R. (2002). Investigation of quantitative measures related to reading disability in a large sample of sib-pairs from the UK. Behavior Genetics, 31(2), 219-230. doi:10.1023/A:1010209629021.

    Abstract

    We describe a family-based sample of individuals with reading disability collected as part of a quantitative trait loci (QTL) mapping study. Eighty-nine nuclear families (135 independent sib-pairs) were identified through a single proband using a traditional discrepancy score of predicted/actual reading ability and a known family history. Eight correlated psychometric measures were administered to each sibling, including single word reading, spelling, similarities, matrices, spoonerisms, nonword and irregular word reading, and a pseudohomophone test. Summary statistics for each measure showed a reduced mean for the probands compared to the co-sibs, which in turn was lower than that of the population. This partial co-sib regression back to the mean indicates that the measures are influenced by familial factors and therefore, may be suitable for a mapping study. The variance of each of the measures remained largely unaffected, which is reassuring for the application of a QTL approach. Multivariate genetic analysis carried out to explore the relationship between the measures identified a common factor between the reading measures that accounted for 54% of the variance. Finally the familiality estimates (range 0.32–0.73) obtained for the reading measures including the common factor (0.68) supported their heritability. These findings demonstrate the viability of this sample for QTL mapping, and will assist in the interpretation of any subsequent linkage findings in an ongoing genome scan.
  • Martin, A. E., & McElree, B. (2009). Memory operations that support language comprehension: Evidence from verb-phrase ellipsis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1231-1239. doi:10.1037/a0016271.

    Abstract

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation.
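    For readers unfamiliar with the speed-accuracy tradeoff procedure, its time-course data are conventionally summarized by fitting a shifted exponential approach to a limit, d'(t) = λ(1 − e^(−β(t − δ))) for t > δ, where λ is asymptotic accuracy, β the rate, and δ the intercept; the sketch below fits this standard function to invented data points, not the authors' results.

```python
# Hedged sketch: fit the standard speed-accuracy tradeoff (SAT) function
#   d'(t) = lam * (1 - exp(-beta * (t - delta)))  for t > delta, else 0,
# to accuracy measured at several processing lags. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, lam, beta, delta):
    return lam * (1.0 - np.exp(-beta * np.clip(t - delta, 0.0, None)))

lags = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 2.0, 3.0])        # processing time (s)
dprime = np.array([0.05, 0.4, 1.1, 1.7, 2.1, 2.3, 2.35])    # accuracy at each lag

params, _ = curve_fit(sat_curve, lags, dprime, p0=(2.5, 2.0, 0.2))
lam_hat, beta_hat, delta_hat = params
# Differences in beta/delta across conditions index processing speed;
# differences in lam index asymptotic accuracy (e.g., PI vs. RI conditions).
```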
  • Massaro, D. W., & Jesse, A. (2009). Read my lips: Speech distortions in musical lyrics can be overcome (slightly) by facial information. Speech Communication, 51(7), 604-621. doi:10.1016/j.specom.2008.05.013.

    Abstract

    Understanding the lyrics of many contemporary songs is difficult, and an earlier study [Hidalgo-Barnes, M., Massaro, D.W., 2007. Read my lips: an animated face helps communicate musical lyrics. Psychomusicology 19, 3–12] showed a benefit for lyrics recognition when seeing a computer-animated talking head (Baldi) mouthing the lyrics along with hearing the singer. However, the contribution of visual information was relatively small compared to what is usually found for speech. In the current experiments, our goal was to determine why the face appears to contribute less when aligned with sung lyrics than when aligned with normal speech presented in noise. The first experiment compared the contribution of the talking head with the originally sung lyrics versus the case when it was aligned with the Festival text-to-speech synthesis (TtS) spoken at the original duration of the song’s lyrics. A small and similar influence of the face was found in both conditions. In the three experiments, we compared the presence of the face when the durations of the TtS were equated with the duration of the original musical lyrics to the case when the lyrics were read with typical TtS durations and this speech embedded in noise. The results indicated that the unusual temporally distorted durations of musical lyrics decreases the contribution of the visible speech from the face.
  • Mauner, G., Melinger, A., Koenig, J.-P., & Bienvenue, B. (2002). When is schematic participant information encoded: Evidence from eye-monitoring. Journal of Memory and Language, 47(3), 386-406. doi:10.1016/S0749-596X(02)00009-8.

    Abstract

    Two eye-monitoring studies examined when unexpressed schematic participant information specified by verbs is used during sentence processing. Experiment 1 compared the processing of sentences with passive and intransitive verbs hypothesized to introduce or not introduce, respectively, an agent when their main clauses were preceded by either agent-dependent rationale clauses or adverbial clause controls. While there were no differences in the processing of passive clauses following rationale and control clauses, intransitive verb clauses elicited anomaly effects following agent-dependent rationale clauses. To determine whether the source of this immediately available schematic participant information is lexically specified or instead derived solely from conceptual sources associated with verbs, Experiment 2 compared the processing of clauses with passive and middle verbs following rationale clauses (e.g., To raise money for the charity, the vase was/had sold quickly…). Although both passive and middle verb forms denote situations that logically require an agent, middle verbs, which by hypothesis do not lexically specify an agent, elicited longer processing times than passive verbs in measures of early processing. These results demonstrate that participants access and interpret lexically encoded schematic participant information in the process of recognizing a verb.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer word. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
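    The embedding statistics reported here come down to checking, for each polysyllabic word, whether any shorter vocabulary item occurs inside it and where; the sketch below illustrates the computation on a toy orthographic lexicon (the real analysis used phonological forms and a full English vocabulary).

```python
# Illustrative sketch of the embedding statistic reported above: for each
# longer word, find shorter vocabulary items embedded in it and their
# position. Toy lexicon and orthographic (not phonological) forms are
# simplifying assumptions.
lexicon = {"ham", "hamster", "star", "am", "trombone", "bone", "rom", "tromp"}

poly = [w for w in lexicon if len(w) > 4]          # stand-in for "polysyllabic"

def embeddings(word, vocab):
    """Return (embedded_word, position) pairs for vocabulary items inside `word`."""
    found = []
    for w in vocab:
        if w != word and w in word:
            pos = ("onset" if word.startswith(w)
                   else "offset" if word.endswith(w) else "medial")
            found.append((w, pos))
    return found

with_embedding = [w for w in poly if embeddings(w, lexicon)]
proportion = len(with_embedding) / len(poly)        # ~84% in the real vocabulary
print(proportion, {w: embeddings(w, lexicon) for w in poly})
```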
  • McQueen, J. M., Norris, D., & Cutler, A. (1999). Lexical influence in phonetic decision-making: Evidence from subcategorical mismatches. Journal of Experimental Psychology: Human Perception and Performance, 25, 1363-1389. doi:10.1037/0096-1523.25.5.1363.

    Abstract

    In 5 experiments, listeners heard words and nonwords, some cross-spliced so that they contained acoustic-phonetic mismatches. Performance was worse on mismatching than on matching items. Words cross-spliced with words and words cross-spliced with nonwords produced parallel results. However, in lexical decision and 1 of 3 phonetic decision experiments, performance on nonwords cross-spliced with words was poorer than on nonwords cross-spliced with nonwords. A gating study confirmed that there were misleading coarticulatory cues in the cross-spliced items; a sixth experiment showed that the earlier results were not due to interitem differences in the strength of these cues. Three models of phonetic decision making (the Race model, the TRACE model, and a postlexical model) did not explain the data. A new bottom-up model is outlined that accounts for the findings in terms of lexical involvement at a dedicated decision-making stage.
  • McQueen, J. M., Jesse, A., & Norris, D. (2009). No lexical–prelexical feedback during speech perception or: Is it time to stop playing those Christmas tapes? Journal of Memory and Language, 61, 1-18. doi:10.1016/j.jml.2009.03.002.

    Abstract

    The strongest support for feedback in speech perception comes from evidence of apparent lexical influence on prelexical fricative-stop compensation for coarticulation. Lexical knowledge (e.g., that the ambiguous final fricative of Christma? should be [s]) apparently influences perception of following stops. We argue that all such previous demonstrations can be explained without invoking lexical feedback. In particular, we show that one demonstration [Magnuson, J. S., McMurray, B., Tanenhaus, M. K., & Aslin, R. N. (2003). Lexical effects on compensation for coarticulation: The ghost of Christmash past. Cognitive Science, 27, 285–298] involved experimentally-induced biases (from 16 practice trials) rather than feedback. We found that the direction of the compensation effect depended on whether practice stimuli were words or nonwords. When both were used, there was no lexically-mediated compensation. Across experiments, however, there were lexical effects on fricative identification. This dissociation (lexical involvement in the fricative decisions but not in the following stop decisions made on the same trials) challenges interactive models in which feedback should cause both effects. We conclude that the prelexical level is sensitive to experimentally-induced phoneme-sequence biases, but that there is no feedback during speech perception.
  • Mead, S., Poulter, M., Uphill, J., Beck, J., Whitfield, J., Webb, T. E., Campbell, T., Adamson, G., Deriziotis, P., Tabrizi, S. J., Hummerich, H., Verzilli, C., Alpers, M. P., Whittaker, J. C., & Collinge, J. (2009). Genetic risk factors for variant Creutzfeldt-Jakob disease: A genome-wide association study. Lancet Neurology, 8(1), 57-66. doi:10.1016/S1474-4422(08)70265-5.

    Abstract

    BACKGROUND: Human and animal prion diseases are under genetic control, but apart from PRNP (the gene that encodes the prion protein), we understand little about human susceptibility to bovine spongiform encephalopathy (BSE) prions, the causal agent of variant Creutzfeldt-Jakob disease (vCJD). METHODS: We did a genome-wide association study of the risk of vCJD and tested for replication of our findings in samples from many categories of human prion disease (929 samples) and control samples from the UK and Papua New Guinea (4254 samples), including controls in the UK who were genotyped by the Wellcome Trust Case Control Consortium. We also did follow-up analyses of the genetic control of the clinical phenotype of prion disease and analysed candidate gene expression in a mouse cellular model of prion infection. FINDINGS: The PRNP locus was strongly associated with risk across several markers and all categories of prion disease (best single SNP [single nucleotide polymorphism] association in vCJD p=2.5 × 10^-17; best haplotypic association in vCJD p=1 × 10^-24). Although the main contribution to disease risk was conferred by PRNP polymorphic codon 129, another nearby SNP conferred increased risk of vCJD. In addition to PRNP, one technically validated SNP association upstream of RARB (the gene that encodes retinoic acid receptor beta) had nominal genome-wide significance (p=1.9 × 10^-7). A similar association was found in a small sample of patients with iatrogenic CJD (p=0.030) but not in patients with sporadic CJD (sCJD) or kuru. In cultured cells, retinoic acid regulates the expression of the prion protein. We found an association with acquired prion disease, including vCJD (p=5.6 × 10^-5), kuru incubation time (p=0.017), and resistance to kuru (p=2.5 × 10^-4), in a region upstream of STMN2 (the gene that encodes SCG10). The risk genotype was not associated with sCJD but conferred an earlier age of onset. Furthermore, expression of Stmn2 was reduced 30-fold post-infection in a mouse cellular model of prion disease. INTERPRETATION: The polymorphic codon 129 of PRNP was the main genetic risk factor for vCJD; however, additional candidate loci have been identified, which justifies functional analyses of these biological pathways in prion disease.
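    The core computation in such a case-control genome-wide association scan is a per-SNP association test; the sketch below shows an allelic chi-square test on invented genotype counts and omits the haplotype tests, covariates, and corrections used in the actual study.

```python
# Hedged sketch of a per-SNP allelic association test in a case-control GWAS.
# Genotype counts are invented; haplotype tests and genomic control from the
# real study are not shown.
import numpy as np
from scipy.stats import chi2_contingency

# Genotype counts per SNP: rows = (cases, controls), columns = (AA, Aa, aa).
snps = {
    "rs_example_1": np.array([[120, 200, 80], [150, 210, 60]]),
    "rs_example_2": np.array([[90, 220, 90], [95, 215, 85]]),
}

def allelic_test(genotype_counts):
    """Collapse genotype counts to allele counts and run a 2x2 chi-square test."""
    aa, ab, bb = genotype_counts.T
    alleles = np.column_stack([2 * aa + ab, 2 * bb + ab])  # (cases, controls) x (A, a)
    chi2, p, dof, _ = chi2_contingency(alleles)
    return p

pvalues = {snp: allelic_test(counts) for snp, counts in snps.items()}
# Genome-wide significance is conventionally declared at p < 5e-8.
```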
  • Melinger, A. (2002). Foot structure and accent in Seneca. International Journal of American Linguistics, 68(3), 287-315.

    Abstract

    Argues that the Seneca accent system can be explained more simply and naturally if the foot structure is reanalyzed as trochaic. Determination of the position of the accent by the position and structure of the accented syllable and by the position and structure of the post-tonic syllable; Assignment of the pair of syllables which interact to predict where accent is assigned in different iambic feet.
  • Menenti, L., Petersson, K. M., Scheeringa, R., & Hagoort, P. (2009). When elephants fly: Differential sensitivity of right and left inferior frontal gyri to discourse and world knowledge. Journal of Cognitive Neuroscience, 21, 2358-2368. doi:10.1162/jocn.2008.21163.

    Abstract

    Both local discourse and world knowledge are known to influence sentence processing. We investigated how these two sources of information conspire in language comprehension. Two types of critical sentences, correct and world knowledge anomalies, were preceded by either a neutral or a local context. The latter made the world knowledge anomalies more acceptable or plausible. We predicted that the effect of world knowledge anomalies would be weaker for the local context. World knowledge effects have previously been observed in the left inferior frontal region (Brodmann's area 45/47). In the current study, an effect of world knowledge was present in this region in the neutral context. We also observed an effect in the right inferior frontal gyrus, which was more sensitive to the discourse manipulation than the left inferior frontal gyrus. In addition, the left angular gyrus reacted strongly to the degree of discourse coherence between the context and critical sentence. Overall, both world knowledge and the discourse context affect the process of meaning unification, but do so by recruiting partly different sets of brain areas.
  • Menon, S., Rosenberg, K., Graham, S. A., Ward, E. M., Taylor, M. E., Drickamer, K., & Leckband, D. E. (2009). Binding-site geometry and flexibility in DC-SIGN demonstrated with surface force measurements. PNAS, 106, 11524-11529. doi:10.1073/pnas.0901783106.

    Abstract

    The dendritic cell receptor DC-SIGN mediates pathogen recognition by binding to glycans characteristic of pathogen surfaces, including those found on HIV. Clustering of carbohydrate-binding sites in the receptor tetramer is believed to be critical for targeting of pathogen glycans, but the arrangement of these sites remains poorly understood. Surface force measurements between apposed lipid bilayers displaying the extracellular domain of DC-SIGN and a neoglycolipid bearing an oligosaccharide ligand provide evidence that the receptor is in an extended conformation and that glycan docking is associated with a conformational change that repositions the carbohydrate-recognition domains during ligand binding. The results further show that the lateral mobility of membrane-bound ligands enhances the engagement of multiple carbohydrate-recognition domains in the receptor oligomer with appropriately spaced ligands. These studies highlight differences between pathogen targeting by DC-SIGN and receptors in which binding sites at fixed spacing bind to simple molecular patterns.

  • Meyer, A. S., & Bock, K. (1999). Representations and processes in the production of pronouns: Some perspectives from Dutch. Journal of Memory and Language, 41(2), 281-301. doi:10.1006/jmla.1999.2649.

    Abstract

    The production and interpretation of pronouns involves the identification of a mental referent and, in connected speech or text, a discourse antecedent. One of the few overt signals of the relationship between a pronoun and its antecedent is agreement in features such as number and grammatical gender. To examine how speakers create these signals, two experiments tested conceptual, lexical, and morphophonological accounts of pronoun production in Dutch. The experiments employed sentence completion and continuation tasks with materials containing noun phrases that conflicted or agreed in grammatical gender. The noun phrases served as the antecedents for demonstrative pronouns (in Experiment 1) and relative pronouns (in Experiment 2) that required gender marking. Gender errors were used to assess the nature of the processes that established the link between pronouns and antecedents. There were more gender errors when candidate antecedents conflicted in grammatical gender, counter to the predictions of a pure conceptual hypothesis. Gender marking on candidate antecedents did not change the magnitude of this interference effect, counter to the predictions of an overt-morphology hypothesis. Mirroring previous findings about pronoun comprehension, the results suggest that speakers of gender-marking languages call on specific linguistic information about antecedents in order to select pronouns and that the information consists of specifications of grammatical gender associated with the lemmas of words.
  • Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE, 4(11), e7785. doi:10.1371/journal.pone.0007785.

    Abstract

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.
  • Mitterer, H., & McQueen, J. M. (2009). Processing reduced word-forms in speech perception using probabilistic knowledge about speech production. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 244-263. doi:10.1037/a0012730.

    Abstract

    Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes.
  • Mitterer, H., Horschig, J. M., Müsseler, J., & Majid, A. (2009). The influence of memory on perception: It's not what things look like, it's what you call them. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(6), 1557-1562. doi:10.1037/a0017019.

    Abstract

    World knowledge influences how we perceive the world. This study shows that this influence is at least partly mediated by declarative memory. Dutch and German participants categorized hues from a yellow-to-orange continuum on stimuli that were prototypically orange or yellow and that were also associated with these color labels. Both groups gave more “yellow” responses if an ambiguous hue occurred on a prototypically yellow stimulus. The language groups were also tested on a stimulus (traffic light) that is associated with the label orange in Dutch and with the label yellow in German, even though the objective color is the same for both populations. Dutch observers categorized this stimulus as orange more often than German observers, in line with the assumption that declarative knowledge mediates the influence of world knowledge on color categorization.

  • Need, A. C., Ge, D., Weale, M. E., Maia, J., Feng, S., Heinzen, E. L., Shianna, K. V., Yoon, W., Kasperavičiūtė, D., Gennarelli, M., Strittmatter, W. J., Bonvicini, C., Rossi, G., Jayathilake, K., Cola, P. A., McEvoy, J. P., Keefe, R. S. E., Fisher, E. M. C., St. Jean, P. L., Giegling, I., Hartmann, A. M., Möller, H.-J., Ruppert, A., Fraser, G., Crombie, C., Middleton, L. T., St. Clair, D., Roses, A. D., Muglia, P., Francks, C., Rujescu, D., Meltzer, H. Y., & Goldstein, D. B. (2009). A genome-wide investigation of SNPs and CNVs in schizophrenia. PLoS Genetics, 5(2), e1000373. doi:10.1371/journal.pgen.1000373.

    Abstract

    We report a genome-wide assessment of single nucleotide polymorphisms (SNPs) and copy number variants (CNVs) in schizophrenia. We investigated SNPs using 871 patients and 863 controls, following up the top hits in four independent cohorts comprising 1,460 patients and 12,995 controls, all of European origin. We found no genome-wide significant associations, nor could we provide support for any previously reported candidate gene or genome-wide associations. We went on to examine CNVs using a subset of 1,013 cases and 1,084 controls of European ancestry, and a further set of 60 cases and 64 controls of African ancestry. We found that eight cases and zero controls carried deletions greater than 2 Mb, of which two, at 8p22 and 16p13.11-p12.4, are newly reported here. A further evaluation of 1,378 controls identified no deletions greater than 2 Mb, suggesting a high prior probability of disease involvement when such deletions are observed in cases. We also provide further evidence for some smaller, previously reported, schizophrenia-associated CNVs, such as those in NRXN1 and APBA2. We could not provide strong support for the hypothesis that schizophrenia patients have a significantly greater “load” of large (>100 kb), rare CNVs, nor could we find common CNVs that associate with schizophrenia. Finally, we did not provide support for the suggestion that schizophrenia-associated CNVs may preferentially disrupt genes in neurodevelopmental pathways. Collectively, these analyses provide the first integrated study of SNPs and CNVs in schizophrenia and support the emerging view that rare deleterious variants may be more important in schizophrenia predisposition than common polymorphisms. While our analyses do not suggest that implicated CNVs impinge on particular key pathways, we do support the contribution of specific genomic regions in schizophrenia, presumably due to recurrent mutation. On balance, these data suggest that very few schizophrenia patients share identical genomic causation, potentially complicating efforts to personalize treatment regimens.
  • Newbury, D. F., Cleak, J. D., Ishikawa-Brush, Y., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Bolton, P. F., Jannoun, L., Slonims, V., Baird, G., Pickles, A., Bishop, D. V. M., Helms., P. J., & The SLI Consortium (2002). A genomewide scan identifies two novel loci involved in specific language impairment. American Journal of Human Genetics, 70(2), 384-398. doi:10.1086/338649.

    Abstract

    Approximately 4% of English-speaking children are affected by specific language impairment (SLI), a disorder in the development of language skills despite adequate opportunity and normal intelligence. Several studies have indicated the importance of genetic factors in SLI; a positive family history confers an increased risk of development, and concordance in monozygotic twins consistently exceeds that in dizygotic twins. However, like many behavioral traits, SLI is assumed to be genetically complex, with several loci contributing to the overall risk. We have compiled 98 families drawn from epidemiological and clinical populations, all with probands whose standard language scores fall ⩾1.5 SD below the mean for their age. Systematic genomewide quantitative-trait–locus analysis of three language-related measures (i.e., the Clinical Evaluation of Language Fundamentals–Revised [CELF-R] receptive and expressive scales and the nonword repetition [NWR] test) yielded two regions, one on chromosome 16 and one on chromosome 19, that both had maximum LOD scores of 3.55. Simulations suggest that, of these two multipoint results, the NWR linkage to chromosome 16q is the most significant, with empirical P values reaching 10⁻⁵, under both Haseman-Elston (HE) analysis (LOD score 3.55; P=.00003) and variance-components (VC) analysis (LOD score 2.57; P=.00008). Single-point analyses provided further support for involvement of this locus, with three markers, under the peak of linkage, yielding LOD scores >1.9. The 19q locus was linked to the CELF-R expressive-language score and exceeded the threshold for suggestive linkage under all types of analysis performed: multipoint HE analysis (LOD score 3.55; empirical P=.00004) and VC (LOD score 2.84; empirical P=.00027) and single-point HE analysis (LOD score 2.49) and VC (LOD score 2.22). Furthermore, both the clinical and epidemiological samples showed independent evidence of linkage on both chromosome 16q and chromosome 19q, indicating that these may represent universally important loci in SLI and, thus, general risk factors for language impairment.
  • Newbury, D. F., Winchester, L., Addis, L., Paracchini, S., Buckingham, L.-L., Clark, A., Cohen, W., Cowie, H., Dworzynski, K., Everitt, A., Goodyer, I. M., Hennessy, E., Kindley, A. D., Miller, L. L., Nasir, J., O'Hare, A., Shaw, D., Simkin, Z., Simonoff, E., Slonims, V., Watson, J., Ragoussis, J., Fisher, S. E., Seckl, J. R., Helms, P. J., Bolton, P. F., Pickles, A., Conti-Ramsden, G., Baird, G., Bishop, D. V., & Monaco, A. P. (2009). CMIP and ATP2C2 modulate phonological short-term memory in language impairment. American Journal of Human Genetics, 85(2), 264-272. doi:10.1016/j.ajhg.2009.07.004.

    Abstract

    Specific language impairment (SLI) is a common developmental disorder characterized by difficulties in language acquisition despite otherwise normal development and in the absence of any obvious explanatory factors. We performed a high-density screen of SLI1, a region of chromosome 16q that shows highly significant and consistent linkage to nonword repetition, a measure of phonological short-term memory that is commonly impaired in SLI. Using two independent language-impaired samples, one family-based (211 families) and another selected from a population cohort on the basis of extreme language measures (490 cases), we detected association to two genes in the SLI1 region: that encoding c-maf-inducing protein (CMIP, minP = 5.5 × 10⁻⁷ at rs6564903) and that encoding calcium-transporting ATPase, type 2C, member 2 (ATP2C2, minP = 2.0 × 10⁻⁵ at rs11860694). Regression modeling indicated that each of these loci exerts an independent effect upon nonword repetition ability. Despite the consistent findings in language-impaired samples, investigation in a large unselected cohort (n = 3612) did not detect association. We therefore propose that variants in CMIP and ATP2C2 act to modulate phonological short-term memory primarily in the context of language impairment. As such, this investigation supports the hypothesis that some causes of language impairment are distinct from factors that influence normal language variation. This work therefore implicates CMIP and ATP2C2 in the etiology of SLI and provides molecular evidence for the importance of phonological short-term memory in language acquisition.

  • Newbury, D. F., Bonora, E., Lamb, J. A., Fisher, S. E., Lai, C. S. L., Baird, G., Jannoun, L., Slonims, V., Stott, C. M., Merricks, M. J., Bolton, P. F., Bailey, A. J., Monaco, A. P., & International Molecular Genetic Study of Autism Consortium (2002). FOXP2 is not a major susceptibility gene for autism or specific language impairment. American Journal of Human Genetics, 70(5), 1318-1327. doi:10.1086/339931.

    Abstract

    The FOXP2 gene, located on human 7q31 (at the SPCH1 locus), encodes a transcription factor containing a polyglutamine tract and a forkhead domain. FOXP2 is mutated in a severe monogenic form of speech and language impairment, segregating within a single large pedigree, and is also disrupted by a translocation in an isolated case. Several studies of autistic disorder have demonstrated linkage to a similar region of 7q (the AUTS1 locus), leading to the proposal that a single genetic factor on 7q31 contributes to both autism and language disorders. In the present study, we directly evaluate the impact of the FOXP2 gene with regard to both complex language impairments and autism, through use of association and mutation screening analyses. We conclude that coding-region variants in FOXP2 do not underlie the AUTS1 linkage and that the gene is unlikely to play a role in autism or more common forms of language impairment.
  • Newman-Norlund, S. E., Noordzij, M. L., Newman-Norlund, R. D., Volman, I. A., De Ruiter, J. P., Hagoort, P., & Toni, I. (2009). Recipient design in tacit communication. Cognition, 111, 46-54. doi:10.1016/j.cognition.2008.12.004.

    Abstract

    The ability to design tailored messages for specific listeners is an important aspect of human communication. The present study investigates whether a mere belief about an addressee’s identity influences the generation and production of a communicative message in a novel, non-verbal communication task. Participants were made to believe they were playing a game with a child or an adult partner, while a confederate acted as both the child and the adult partner, with matched performance and response times. This belief influenced the participants’ behavior: they spent more time when interacting with the presumed child addressee, but only during communicative portions of the game, i.e., using time as a tool to place emphasis on target information. This communicative adaptation attenuated with experience, and it was related to personality traits, namely Empathy and Need for Cognition measures. Overall, these findings indicate that novel nonverbal communicative interactions are selected according to a socio-centric perspective and are strongly influenced by participants’ traits.
  • Niemi, J., Laine, M., & Järvikivi, J. (2009). Paradigmatic and extraparadigmatic morphology in the mental lexicon: Experimental evidence for a dissociation. The mental lexicon, 4(1), 26-40. doi:10.1075/ml.4.1.02nie.

    Abstract

    The present study discusses psycholinguistic evidence for a difference between paradigmatic and extraparadigmatic morphology by investigating the processing of Finnish inflected and cliticized words. The data are derived from three sources of Finnish: from single-word reading performance in an agrammatic deep dyslexic speaker, as well as from visual lexical decision and wordness/learnability ratings of cliticized vs. inflected items by normal Finnish speakers. The agrammatic speaker showed awareness of the suffixes in multimorphemic words, including clitics, since he attempted to fill in this slot with morphological material. However, he never produced a clitic — either as the correct response or as an error — in any morphological configuration (simplex, derived, inflected, compound). Moreover, he produced more nominative singular errors for case-inflected nouns than he did for the cliticized words, a pattern that is expected if case-inflected forms were closely associated with their lexical heads, i.e., if they were paradigmatic and cliticized words were not. Furthermore, a visual lexical decision task with normal speakers of Finnish showed an additional processing cost (longer latencies and more errors) on cliticized than on case-inflected noun forms. Finally, a rating task indicated no difference in relative wordness between these two types of words. However, the same cliticized words were judged harder to learn as L2 items than the inflected words, most probably due to their conceptual/semantic properties, in other words due to their lack of word-level translation equivalents in SAVE languages. Taken together, the present results suggest that the distinction between paradigmatic and extraparadigmatic morphology is psychologically real.
  • Nijland, L., & Janse, E. (Eds.). (2009). Auditory processing in speakers with acquired or developmental language disorders [Special Issue]. Clinical Linguistics and Phonetics, 23(3).
  • Noordzij, M., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2009). Brain mechanisms underlying human communication. Frontiers in Human Neuroscience, 3:14. doi:10.3389/neuro.09.014.2009.

    Abstract

    Human communication has been described as involving the coding-decoding of a conventional symbol system, which could be supported by parts of the human motor system (i.e. the “mirror neurons system”). However, this view does not explain how these conventions could develop in the first place. Here we target the neglected but crucial issue of how people organize their non-verbal behavior to communicate a given intention without pre-established conventions. We have measured behavioral and brain responses in pairs of subjects during communicative exchanges occurring in a real, interactive, on-line social context. In two fMRI studies, we found robust evidence that planning new communicative actions (by a sender) and recognizing the communicative intention of the same actions (by a receiver) relied on spatially overlapping portions of their brains (the right posterior superior temporal sulcus). The response of this region was lateralized to the right hemisphere, modulated by the ambiguity in meaning of the communicative acts, but not by their sensorimotor complexity. These results indicate that the sender of a communicative signal uses his own intention recognition system to make a prediction of the intention recognition performed by the receiver. This finding supports the notion that our communicative abilities are distinct from both sensorimotor processes and language abilities.
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Norris, D., McQueen, J. M., & Cutler, A. (2002). Bias effects in facilitatory phonological priming. Memory & Cognition, 30(3), 399-411.

    Abstract

    In four experiments, we examined the facilitation that occurs when spoken-word targets rhyme with preceding spoken primes. In Experiment 1, listeners’ lexical decisions were faster to words following rhyming words (e.g., ramp–LAMP) than to words following unrelated primes (e.g., pink–LAMP). No facilitation was observed for nonword targets. Targets that almost rhymed with their primes (foils; e.g., bulk–SULSH) were included in Experiment 2; facilitation for rhyming targets was severely attenuated. Experiments 3 and 4 were single-word shadowing variants of the earlier experiments. There was facilitation for both rhyming words and nonwords; the presence of foils had no significant influence on the priming effect. A major component of the facilitation in lexical decision appears to be strategic: Listeners are biased to say “yes” to targets that rhyme with their primes, unless foils discourage this strategy. The nonstrategic component of phonological facilitation may reflect speech perception processes that operate prior to lexical access.
  • Nyberg, L., Forkstam, C., Petersson, K. M., Cabeza, R., & Ingvar, M. (2002). Brain imaging of human memory systems: Between-systems similarities and within-system differences. Cognitive Brain Research, 13(2), 281-292. doi:10.1016/S0926-6410(02)00052-6.

    Abstract

    There is much evidence for the existence of multiple memory systems. However, it has been argued that tasks assumed to reflect different memory systems share basic processing components and are mediated by overlapping neural systems. Here we used multivariate analysis of PET-data to analyze similarities and differences in brain activity for multiple tests of working memory, semantic memory, and episodic memory. The results from two experiments revealed between-systems differences, but also between-systems similarities and within-system differences. Specifically, support was obtained for a task-general working-memory network that may underlie active maintenance. Premotor and parietal regions were salient components of this network. A common network was also identified for two episodic tasks, cued recall and recognition, but not for a test of autobiographical memory. This network involved regions in right inferior and polar frontal cortex, and lateral and medial parietal cortex. Several of these regions were also engaged during the working-memory tasks, indicating shared processing for episodic and working memory. Fact retrieval and synonym generation were associated with increased activity in left inferior frontal and middle temporal regions and right cerebellum. This network was also associated with the autobiographical task, but not with living/non-living classification, and may reflect elaborate retrieval of semantic information. Implications of the present results for the classification of memory tasks with respect to systems and/or processes are discussed.
  • Obleser, J., & Eisner, F. (2009). Pre-lexical abstraction of speech in the auditory cortex. Trends in Cognitive Sciences, 13, 14-19. doi:10.1016/j.tics.2008.09.005.

    Abstract

    Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.

  • Ogasawara, N., & Warner, N. (2009). Processing missing vowels: Allophonic processing in Japanese. Language and Cognitive Processes, 24, 376-411. doi:10.1080/01690960802084028.

    Abstract

    The acoustic realisation of a speech sound varies, often showing allophonic variation triggered by surrounding sounds. Listeners recognise words and sounds well despite such variation, and even make use of allophonic variability in processing. This study reports five experiments on processing of the reduced/unreduced allophonic alternation of Japanese high vowels. The results show that listeners use phonological knowledge of their native language during phoneme processing and word recognition. However, interactions of the phonological and acoustic effects differ in these two processes. A facilitatory phonological effect and an inhibitory acoustic effect cancel one another out in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect. Four potential models of the processing of allophonic variation are discussed. The results can be accommodated in two of them, but require additional assumptions or modifications to the models, and primarily support lexical specification of allophonic variability.

  • Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.

    Abstract

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
  • Osterhout, L., & Hagoort, P. (1999). A superficial resemblance does not necessarily mean you are part of the family: Counterarguments to Coulson, King and Kutas (1998) in the P600/SPS-P300 debate. Language and Cognitive Processes, 14, 1-14. doi:10.1080/016909699386356.

    Abstract

    Two recent studies (Coulson et al., 1998; Osterhout et al., 1996) examined the relationship between the event-related brain potential (ERP) responses to linguistic syntactic anomalies (P600/SPS) and domain-general unexpected events (P300). Coulson et al. concluded that these responses are highly similar, whereas Osterhout et al. concluded that they are distinct. In this comment, we evaluate the relative merits of these claims. We conclude that the available evidence indicates that the ERP response to syntactic anomalies is at least partially distinct from the ERP response to unexpected anomalies that do not involve a grammatical violation.
  • Otake, T., & Cutler, A. (1999). Perception of suprasegmental structure in a nonnative dialect. Journal of Phonetics, 27, 229-253. doi:10.1006/jpho.1999.0095.

    Abstract

    Two experiments examined the processing of Tokyo Japanese pitch-accent distinctions by native speakers of Japanese from two accentless-variety areas. In both experiments, listeners were presented with Tokyo Japanese speech materials used in an earlier study with Tokyo Japanese listeners, who clearly exploited the pitch-accent information in spoken-word recognition. In the first experiment, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted. Both new groups were, overall, as successful at this task as Tokyo Japanese speakers had been, but their response patterns differed from those of the Tokyo Japanese, for instance in that a bias towards H judgments in the Tokyo Japanese responses was weakened in the present groups' responses. In a second experiment, listeners heard word fragments and guessed what the words were; in this task, the speakers from accentless areas again performed significantly above chance, but their responses showed less sensitivity to the information in the input, and greater bias towards vocabulary distribution frequencies, than had been observed with the Tokyo Japanese listeners. The results suggest that experience with a local accentless dialect affects the processing of accent for word recognition in Tokyo Japanese, even for listeners with extensive exposure to Tokyo Japanese.
  • Otten, M., & Van Berkum, J. J. A. (2009). Does working memory capacity affect the ability to predict upcoming words in discourse? Brain Research, 1291, 92-101. doi:10.1016/j.brainres.2009.07.042.

    Abstract

    Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers in their overall capability to make predictions, because of their more limited cognitive resources. High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive ‘prime control’ stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300–600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900–1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.
  • Ozyurek, A. (2002). Do speakers design their co-speech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language, 46(4), 688-704. doi:10.1006/jmla.2001.2826.

    Abstract

    Do speakers use spontaneous gestures accompanying their speech for themselves or to communicate their message to their addressees? Two experiments show that speakers change the orientation of their gestures depending on the location of shared space, that is, the intersection of the gesture spaces of the speakers and addressees. Gesture orientations change more frequently when they accompany spatial prepositions such as into and out, which describe motion that has a beginning and end point, rather than across, which depicts an unbounded path across space. Speakers change their gestures so that they represent the beginning and end point of motion INTO or OUT by moving into or out of the shared space. Thus, speakers design their gestures for their addressees and therefore use them to communicate. This has implications for the view that gestures are a part of language use as well as for the role of gestures in speech production.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Dynamic changes in the functional anatomy of the human brain during recall of abstract designs related to practice. Neuropsychologia, 37, 567-587.

    Abstract

    In the present PET study we explore some functional aspects of the interaction between attentional/control processes and learning/memory processes. The network of brain regions supporting recall of abstract designs was studied in a less practiced and in a well practiced state. The results indicate that automaticity, i.e., a decreased dependence on attentional and working memory resources, develops as a consequence of practice. This corresponds to the practice-related decreases of activity in the prefrontal, anterior cingulate, and posterior parietal regions. In addition, the activity of the medial temporal lobe (MTL) regions decreased as a function of practice. This indicates an inverse relation between the strength of encoding and the activation of the MTL during retrieval. Furthermore, the pattern of practice-related increases in the auditory region, the posterior insular-opercular region extending into the perisylvian supramarginal region, and the right mid occipito-temporal region may reflect a lower degree of inhibitory attentional modulation of task-irrelevant processing and more fully developed representations of the abstract designs, respectively. We also suggest that free recall is dependent on bilateral prefrontal processing, in particular non-automatic free recall. The present results confirm previous functional neuroimaging studies of memory retrieval indicating that recall is subserved by a network of interacting brain regions. Furthermore, the results indicate that some components of the neural network subserving free recall may have a dynamic role and that there is a functional restructuring of the information processing networks during the learning process.
  • Petersson, K. M., Reis, A., Castro-Caldas, A., & Ingvar, M. (1999). Effective auditory-verbal encoding activates the left prefrontal and the medial temporal lobes: A generalization to illiterate subjects. NeuroImage, 10, 45-54. doi:10.1006/nimg.1999.0446.

    Abstract

    Recent event-related fMRI studies indicate that the prefrontal (PFC) and the medial temporal lobe (MTL) regions are more active during effective encoding than during ineffective encoding. The within-subject design and the use of well-educated young college students in these studies make it important to replicate these results in other study populations. In this PET study, we used an auditory word-pair association cued-recall paradigm and investigated a group of healthy upper middle-aged/older illiterate women. We observed a positive correlation between cued-recall success and the regional cerebral blood flow of the left inferior PFC (BA 47) and the MTLs. Specifically, we used the cued-recall success as a covariate in a general linear model, and the results confirmed that the left inferior PFC and the MTL are more active during effective encoding than during ineffective encoding. These effects were observed during encoding of both semantically and phonologically related word pairs, indicating that these effects are robust in the studied population, that is, reproducible within group. These results generalize the results of Brewer et al. (1998, Science 281, 1185–1187) and Wagner et al. (1998, Science 281, 1188–1191) to an upper middle-aged/older illiterate population. In addition, the present study indicates that effective relational encoding correlates positively with the activity of the anterior medial temporal lobe regions.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1999). Learning-related effects and functional neuroimaging. Human Brain Mapping, 7, 234-243. doi:10.1002/(SICI)1097-0193(1999)7:4<234:AID-HBM2>3.0.CO;2-O.

    Abstract

    A fundamental problem in the study of learning is that learning-related changes may be confounded by nonspecific time effects. There are several strategies for handling this problem. This problem may be of greater significance in functional magnetic resonance imaging (fMRI) compared to positron emission tomography (PET). Using the general linear model, we describe, compare, and discuss two approaches for separating learning-related from nonspecific time effects. The first approach makes assumptions on the general behavior of nonspecific effects and explicitly models these effects, i.e., nonspecific time effects are incorporated as a linear or nonlinear confounding covariate in the statistical model. The second strategy makes no a priori assumption concerning the form of nonspecific time effects, but implicitly controls for nonspecific effects using an interaction approach, i.e., learning effects are assessed with an interaction contrast. The two approaches depend on specific assumptions and have specific limitations. With certain experimental designs, both approaches may be used and the results compared, lending particular support to effects that are independent of the method used. A third and perhaps better approach that sometimes may be practically unfeasible is to use a completely temporally balanced experimental design. The choice of approach may be of particular importance when learning related effects are studied with fMRI.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging I: Non-inferential methods and statistical models. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1239-1260.
  • Petersson, K. M., Nichols, T. E., Poline, J.-B., & Holmes, A. P. (1999). Statistical limitations in functional neuroimaging II: Signal detection and statistical inference. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354, 1261-1282.
  • Petrovic, P., Kalso, E., Petersson, K. M., & Ingvar, M. (2002). Placebo and opioid analgesia - Imaging a shared neuronal network. Science, 295(5560), 1737-1740. doi:10.1126/science.1067176.

    Abstract

    It has been suggested that placebo analgesia involves both higher order cognitive networks and endogenous opioid systems. The rostral anterior cingulate cortex (rACC) and the brainstem are implicated in opioid analgesia, suggesting a similar role for these structures in placebo analgesia. Using positron emission tomography, we confirmed that both opioid and placebo analgesia are associated with increased activity in the rACC. We also observed a covariation between the activity in the rACC and the brainstem during both opioid and placebo analgesia, but not during the pain-only condition. These findings indicate a related neural mechanism in placebo and opioid analgesia.
  • Petrovic, P., Ingvar, M., Stone-Elander, S., Petersson, K. M., & Hansson, P. (1999). A PET activation study of dynamic mechanical allodynia in patients with mononeuropathy. Pain, 83, 459-470.

    Abstract

    The objective of this study was to investigate the central processing of dynamic mechanical allodynia in patients with mononeuropathy. Regional cerebral blood flow, as an indicator of neuronal activity, was measured with positron emission tomography. Paired comparisons were made between three different states: rest, allodynia during brushing of the painful skin area, and brushing of the homologous contralateral area. Bilateral activations were observed in the primary somatosensory cortex (S1) and the secondary somatosensory cortex (S2) during allodynia compared to rest. The S1 activation contralateral to the site of the stimulus was more pronounced during allodynia than during innocuous touch. Significant activations of the contralateral posterior parietal cortex, the periaqueductal gray (PAG), the thalamus bilaterally, and motor areas were also observed in the allodynic state compared to both non-allodynic states. In the anterior cingulate cortex (ACC) there was only a suggested activation when the allodynic state was compared with the non-allodynic states. In order to account for the individual variability in the intensity of allodynia and ongoing spontaneous pain, rCBF was regressed on the individually reported pain intensity, and significant covariations were observed in the ACC and the right anterior insula. Significantly decreased regional blood flow was observed bilaterally in the medial and lateral temporal lobe as well as in the occipital and posterior cingulate cortices when the allodynic state was compared to the non-painful conditions. This finding is consistent with previous studies suggesting attentional modulation and a central coping strategy for known and expected painful stimuli. Involvement of the medial pain system has previously been reported in patients with mononeuropathy during ongoing spontaneous pain. This study reveals a bilateral activation of the lateral pain system as well as involvement of the medial pain system during dynamic mechanical allodynia in patients with mononeuropathy.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2002). A regression analysis study of the primary somatosensory cortex during pain. NeuroImage, 16(4), 1142-1150. doi:10.1006/nimg.2002.1069.

    Abstract

    Several functional imaging studies of pain, using a number of different experimental paradigms and a variety of reference states, have failed to detect activations in the somatosensory cortices, while other imaging studies of pain have reported significant activations in these regions. The role of the somatosensory areas in pain processing has therefore been debated. In the present study the left hand was immersed in painfully cold water (standard cold pressor test) and in nonpainfully cold water during 2 min, and PET-scans were obtained either during the first or the second minute of stimulation. We observed no significant increase of activity in the somatosensory regions when the painful conditions were directly compared with the control conditions. In order to better understand the role of the primary somatosensory cortex (S1) in pain processing we used a regression analysis to study the relation between a ROI (region of interest) in the somatotopic S1-area for the stimulated hand and other regions known to be involved in pain processing. We hypothesized that although no increased activity was observed in the S1 during pain, this region would change its covariation pattern during noxious input as compared to the control stimulation if it is involved in or affected by the processing of pain. In the nonpainful cold conditions widespread regions of the ipsilateral and contralateral somatosensory cortex showed a positive covariation with the activity in the S1-ROI. However, during the first and second minute of pain this regression was significantly attenuated. During the second minute of painful stimulation there was a significant positive covariation between the activity in the S1-ROI and the other regions that are known to be involved in pain processing. Importantly, this relation was significantly stronger for the insula and the orbitofrontal cortex bilaterally when compared to the nonpainful state. The results indicate that the S1-cortex may be engaged in or affected by the processing of pain although no differential activity is observed when pain is compared with the reference condition.
  • Pijls, F., & Kempen, G. (1986). Een psycholinguïstisch model voor grammatische samentrekking. De Nieuwe Taalgids, 79, 217-234.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Kan, C. C., Buitelaar, J. K., & Hagoort, P. (2009). Defeasible reasoning in high-functioning adults with autism: Evidence for impaired exception-handling. Neuropsychologia, 47, 644-651. doi:10.1016/j.neuropsychologia.2008.11.011.

    Abstract

    While autism is one of the most intensively researched psychiatric disorders, little is known about reasoning skills of people with autism. The focus of this study was on defeasible inferences, that is inferences that can be revised in the light of new information. We used a behavioral task to investigate (a) conditional reasoning and (b) the suppression of conditional inferences in high-functioning adults with autism. In the suppression task a possible exception was made salient which could prevent a conclusion from being drawn. We predicted that the autism group would have difficulties dealing with such exceptions because they require mental flexibility to adjust to the context, which is often impaired in autism. The findings confirm our hypothesis that high-functioning adults with autism have a specific difficulty with exception-handling during reasoning. It is suggested that defeasible reasoning is also involved in other cognitive domains. Implications for neural underpinnings of reasoning and autism are discussed.
  • Pijnacker, J., Hagoort, P., Buitelaar, J., Teunisse, J.-P., & Geurts, B. (2009). Pragmatic inferences in high-functioning adults with autism and Asperger syndrome. Journal of Autism and Developmental Disorders, 39(4), 607-618. doi:10.1007/s10803-008-0661-8.

    Abstract

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like "Some sparrows are birds". This sentence is logically true, but pragmatically inappropriate if the scalar implicature "Not all sparrows are birds" is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome.
  • Poletiek, F. H. (2002). [Review of the book Adaptive thinking: Rationality in the real world by G. Gigerenzer]. Acta Psychologica, 111(3), 351-354. doi:10.1016/S0001-6918(02)00046-X.
  • Poletiek, F. H. (2002). How psychiatrists and judges assess the dangerousness of persons with mental illness: An 'expertise bias'. Behavioral Sciences & the Law, 20(1-2), 19-29. doi:10.1002/bsl.468.

    Abstract

    When assessing dangerousness of mentally ill persons with the objective of making a decision on civil commitment, medical and legal experts use information typically belonging to their professional frame of reference. This is investigated in two studies of the commitment decision. It is hypothesized that an ‘expertise bias’ may explain differences between the medical and the legal expert in defining the dangerousness concept (study 1), and in assessing the seriousness of the danger (study 2). Judges define dangerousness more often as harming others, whereas psychiatrists more often include harm to self in the definition. In assessing the seriousness of the danger, experts tend to be more tolerant with regard to false negatives, as the type of behavior is more familiar to them. The theoretical and practical implications of the results are discussed.
  • Poletiek, F. H. (2002). Implicit learning of a recursive rule in an artificial grammar. Acta Psychologica, 111(3), 323-335. doi:10.1016/S0001-6918(02)00057-4.

    Abstract

    Participants performed an artificial grammar learning task, in which the standard finite state grammar (J. Verb. Learn. Verb. Behavior 6 (1967) 855) was extended with a recursive rule generating self-embedded sequences. We studied the learnability of such a rule in two experiments. The results verify the general hypothesis that recursivity can be learned in an artificial grammar learning task. However, this learning seems to be based on recognising chunks rather than on abstract rule induction. First, performance was better for strings with more than one level of self-embedding in the sequence, which reveal the self-embedding pattern more clearly. Second, the infinite repeatability of the recursive rule application was not spontaneously induced from the training, but it was when an additional cue about this possibility was given. Finally, participants were able to verbalise their knowledge of the fragments making up the sequences, especially in the crucial front and back positions, whereas knowledge of the underlying structure, to the extent it was acquired, was not articulatable. The results are discussed in relation to previous studies on the implicit learnability of complex and abstract rules.
  • Poletiek, F. H., & Van Schijndel, T. J. P. (2009). Stimulus set size and statistical coverage of the grammar in artificial grammar learning. Psychonomic Bulletin & Review, 16(6), 1058-1064. doi:10.3758/PBR.16.6.1058.

    Abstract

    Adults and children acquire knowledge of the structure of their environment on the basis of repeated exposure to samples of structured stimuli. In the study of inductive learning, a straightforward issue is how much sample information is needed to learn the structure. The present study distinguishes between two measures for the amount of information in the sample: set size and the extent to which the set of exemplars statistically covers the underlying structure. In an artificial grammar learning experiment, learning was affected by the sample’s statistical coverage of the grammar, but not by its mere size. Our result suggests an alternative explanation of the set size effects on learning found in previous studies (McAndrews & Moscovitch, 1985; Meulemans & Van der Linden, 1997), because, as we argue, set size was confounded with statistical coverage in these studies.
  • Poletiek, F. H. (2009). Popper's Severity of Test as an intuitive probabilistic model of hypothesis testing. Behavioral and Brain Sciences, 32(1), 99-100. doi:10.1017/S0140525X09000454.
  • Poletiek, F. H., & Wolters, G. (2009). What is learned about fragments in artificial grammar learning? A transitional probabilities approach. Quarterly Journal of Experimental Psychology, 62(5), 868-876. doi:10.1080/17470210802511188.

    Abstract

    Learning local regularities in sequentially structured materials is typically assumed to be based on encoding of the frequencies of these regularities. We explore the view that transitional probabilities between elements of chunks, rather than frequencies of chunks, may be the primary factor in artificial grammar learning (AGL). The transitional probability model (TPM) that we propose is argued to provide an adaptive and parsimonious strategy for encoding local regularities in order to induce sequential structure from an input set of exemplars of the grammar. In a variant of the AGL procedure, in which participants estimated the frequencies of bigrams occurring in a set of exemplars they had been exposed to previously, participants were shown to be more sensitive to local transitional probability information than to mere pattern frequencies.
