Publications

  • Hagoort, P. (2008). Mijn omweg naar de filosofie. Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 100(4), 303-310.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., & Call, J. (2008). Imitation recognition in great apes. Current Biology, 18(7), 288-290. doi:10.1016/j.cub.2008.02.031.

    Abstract

    Human infants imitate not only to acquire skill, but also as a fundamental part of social interaction [1], [2] and [3]. They recognise when they are being imitated by showing increased visual attention to imitators (implicit recognition) and by engaging in so-called testing behaviours (explicit recognition). Implicit recognition affords the ability to recognize structural and temporal contingencies between actions across agents, whereas explicit recognition additionally affords the ability to understand the directional impact of one's own actions on others' actions [1], [2] and [3]. Imitation recognition is thought to foster understanding of social causality, intentionality in others and the formation of a concept of self as different from other [3], [4] and [5]. Pigtailed macaques (Macaca nemestrina) implicitly recognize being imitated [6], but unlike chimpanzees [7], they show no sign of explicit imitation recognition. We investigated imitation recognition in 11 individuals from the four species of non-human great apes. We replicated results previously found with a chimpanzee [7] and, critically, have extended them to the other great ape species. Our results show a general prevalence of imitation recognition in all great apes and thereby demonstrate important differences between great apes and monkeys in their understanding of contingent social interactions.
  • Hayano, K. (2008). Talk and body: Negotiating action framework and social relationship in conversation. Studies in English and American Literature, 43, 187-198.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., & Carlyon, R. P. (2008). Perceptual learning of noise vocoded words: Effects of feedback and lexicality. Journal of Experimental Psychology: Human Perception and Performance, 34(2), 460-474. doi:10.1037/0096-1523.34.2.460.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners that heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words suggesting a sublexical locus for learning and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Huettig, F., & Hartsuiker, R. J. (2008). When you name the pizza you look at the coin and the bread: Eye movements reveal semantic activation during word production. Memory & Cognition, 36(2), 341-360. doi:10.3758/MC.36.2.341.

    Abstract

    Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 × 2 array and identified and named a target picture on the basis of either category (e.g., "What is the name of the musical instrument?") or visual-form (e.g., "What is the name of the circular object?") instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).
  • Hunley, K., Dunn, M., Lindström, E., Reesink, G., Terrill, A., Healy, M. E., Koki, G., Friedlaender, F. R., & Friedlaender, J. S. (2008). Genetic and linguistic coevolution in Northern Island Melanesia. PLoS Genetics, 4(10): e1000239. doi:10.1371/journal.pgen.1000239.

    Abstract

    Recent studies have detailed a remarkable degree of genetic and linguistic diversity in Northern Island Melanesia. Here we utilize that diversity to examine two models of genetic and linguistic coevolution. The first model predicts that genetic and linguistic correspondences formed following population splits and isolation at the time of early range expansions into the region. The second is analogous to the genetic model of isolation by distance, and it predicts that genetic and linguistic correspondences formed through continuing genetic and linguistic exchange between neighboring populations. We tested the predictions of the two models by comparing observed and simulated patterns of genetic variation, genetic and linguistic trees, and matrices of genetic, linguistic, and geographic distances. The data consist of 751 autosomal microsatellites and 108 structural linguistic features collected from 33 Northern Island Melanesian populations. The results of the tests indicate that linguistic and genetic exchange have erased any evidence of a splitting and isolation process that might have occurred early in the settlement history of the region. The correlation patterns are also inconsistent with the predictions of the isolation by distance coevolutionary process in the larger Northern Island Melanesian region, but there is strong evidence for the process in the rugged interior of the largest island in the region (New Britain). There we found some of the strongest recorded correlations between genetic, linguistic, and geographic distances. We also found that, throughout the region, linguistic features have generally been less likely to diffuse across population boundaries than genes. The results from our study, based on exceptionally fine-grained data, show that local genetic and linguistic exchange are likely to obscure evidence of the early history of a region, and that language barriers do not particularly hinder genetic exchange. In contrast, global patterns may emphasize more ancient demographic events, including population splits associated with the early colonization of major world regions.
  • Isaac, A., Schlobach, S., Matthezing, H., & Zinn, C. (2008). Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies. Library Review, 57(3), 187-199.
  • Janse, E. (2008). Spoken-word processing in aphasia: Effects of item overlap and item repetition. Brain and Language, 105, 185-198. doi:10.1016/j.bandl.2007.10.002.

    Abstract

    Two studies were carried out to investigate the effects of presentation of primes showing partial (word-initial) or full overlap on processing of spoken target words. The first study investigated whether time compression would interfere with lexical processing so as to elicit aphasic-like performance in non-brain-damaged subjects. The second study was designed to compare effects of item overlap and item repetition in aphasic patients of different diagnostic types. Time compression did not interfere with lexical deactivation for the non-brain-damaged subjects. Furthermore, all aphasic patients showed immediate inhibition of co-activated candidates. These combined results show that deactivation is a fast process. Repetition effects, however, seem to arise only at the longer term in aphasic patients. Importantly, poor performance on diagnostic verbal STM tasks was shown to be related to lexical decision performance in both overlap and repetition conditions, which suggests a common underlying deficit.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Janzen, G., Jansen, C., & Van Turennout, M. (2008). Memory consolidation of landmarks in good navigators. Hippocampus, 18, 40-47.

    Abstract

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful navigation.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Johnson, E. K., & Seidl, A. (2008). Clause segmentation by 6-month-olds: A crosslinguistic perspective. Infancy, 13, 440-455. doi:10.1080/15250000802329321.

    Abstract

    Each clause and phrase boundary necessarily aligns with a word boundary. Thus, infants’ attention to the edges of clauses and phrases may help them learn some of the language-specific cues defining word boundaries. Attention to prosodically well-formed clauses and phrases may also help infants begin to extract information important for learning the grammatical structure of their language. Despite the potentially important role that the perception of large prosodic units may play in early language acquisition, there has been little work investigating the extraction of these units from fluent speech by infants learning languages other than English. We report 2 experiments investigating Dutch learners’ clause segmentation abilities. In these studies, Dutch-learning 6-month-olds readily extract clauses from speech. However, Dutch learners differ from English learners in that they seem to be more reliant on pauses to detect clause boundaries. Two closely related explanations for this finding are considered, both of which stem from the acoustic differences in clause boundary realizations in Dutch versus English.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kerkhofs, R., Vonk, W., Schriefers, H., & Chwilla, D. J. (2008). Sentence processing in the visual and auditory modality: Do comma and prosodic break have parallel functions? Brain Research, 1224, 102-118. doi:10.1016/j.brainres.2008.05.034.

    Abstract

    Two Event-Related Potential (ERP) studies contrast the processing of locally ambiguous sentences in the visual and the auditory modality. These sentences are disambiguated by a lexical element. Before this element appears in a sentence, the sentence can also be disambiguated by a boundary marker: a comma in the visual modality, or a prosodic break in the auditory modality. Previous studies have shown that a specific ERP component, the Closure Positive Shift (CPS), can be elicited by these markers. The results of the present studies show that both the comma and the prosodic break disambiguate the ambiguous sentences before the critical lexical element, despite the fact that a clear CPS is only found in the auditory modality. Comma and prosodic break thus have parallel functions irrespective of whether they do or do not elicit a CPS.
  • Kho, K. H., Indefrey, P., Hagoort, P., Van Veelen, C. W. M., Van Rijen, P. C., & Ramsey, N. F. (2008). Unimpaired sentence comprehension after anterior temporal cortex resection. Neuropsychologia, 46(4), 1170-1178. doi:10.1016/j.neuropsychologia.2007.10.014.

    Abstract

    Functional imaging studies have demonstrated involvement of the anterior temporal cortex in sentence comprehension. It is unclear, however, whether the anterior temporal cortex is essential for this function. We studied two aspects of sentence comprehension, namely syntactic and prosodic comprehension, in temporal lobe epilepsy patients who were candidates for resection of the anterior temporal lobe. Methods: Temporal lobe epilepsy patients (n = 32) with normal (left) language dominance were tested on syntactic and prosodic comprehension before and after removal of the anterior temporal cortex. The prosodic comprehension test was also compared with performance of healthy control subjects (n = 47) before surgery. Results: Overall, temporal lobe epilepsy patients did not differ from healthy controls in syntactic and prosodic comprehension before surgery. They did perform less well on an affective prosody task. Post-operative testing revealed that syntactic and prosodic comprehension did not change after removal of the anterior temporal cortex. Discussion: The unchanged performance on syntactic and prosodic comprehension after removal of the anterior temporal cortex suggests that this area is not indispensable for sentence comprehension functions in temporal epilepsy patients. Potential implications for the postulated role of the anterior temporal lobe in the healthy brain are discussed.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kidd, E., & Cameron-Faulkner, T. (2008). The acquisition of the multiple senses of with. Linguistics, 46(1), 33-61. doi:10.1515/LING.2008.002.

    Abstract

    The present article reports on an investigation of one child's acquisition of the multiple senses of the preposition with from 2;0–4;0. Two competing claims regarding children's early representation and subsequent acquisition of with were investigated. The “multiple meanings” hypothesis predicts that children form individual form-meaning pairings for with as separate lexical entries. The “monosemy approach” (McKercher 2001) claims that children apply a unitary meaning by abstracting core features early in acquisition. The child's (“Brian”) speech and his input were coded according to eight distinguishable senses of with. The results showed that Brian first acquired the senses that were most frequent in the input (accompaniment, attribute, and instrument). Less common senses took much longer to emerge. A detailed analysis of the input showed that a variety of clues are available that potentially enable the child to distinguish among high frequency senses. The acquisition data suggested that the child initially applied a restricted one-to-one form-meaning mapping for with, which is argued to reflect the spatial properties of the preposition. On the basis of these results it is argued that neither the monosemy nor the multiple meanings approach can fully explain the data, but that the results are best explained by a combination of word learning principles and children's ability to categorize the contextual properties of each sense's use in the ambient language.
  • Kidd, E., & Lum, J. A. (2008). Sex differences in past tense overregularization. Developmental Science, 11(6), 882-889. doi:10.1111/j.1467-7687.2008.00744.x.

    Abstract

    Hartshorne and Ullman (2006) presented naturalistic language data from 25 children (15 boys, 10 girls) and showed that girls produced more past tense overregularization errors than did boys. In particular, girls were more likely to overregularize irregular verbs whose stems share phonological similarities with regular verbs. It was argued that the result supported the Declarative/Procedural model of language, a neuropsychological analogue of the dual-route approach to language. In the current study we present experimental data that are inconsistent with these naturalistic data. Eighty children (40 males, 40 females) aged 5;0–6;9 completed a past tense elicitation task, a test of declarative memory, and a test of non-verbal intelligence. The results revealed no sex differences on any of the measures. Instead, the best predictors of overregularization rates were item-level features of the test verbs. We discuss the results within the context of the dual versus single route debate on past tense acquisition.
  • Kim, J., Davis, C., & Cutler, A. (2008). Perceptual tests of rhythmic similarity: II. Syllable rhythm. Language and Speech, 51(4), 343-359. doi:10.1177/0023830908099069.

    Abstract

    To segment continuous speech into its component words, listeners make use of language rhythm; because rhythm differs across languages, so do the segmentation procedures which listeners use. For each of stress-, syllable- and mora-based rhythmic structure, perceptual experiments have led to the discovery of corresponding segmentation procedures. In the case of mora-based rhythm, similar segmentation has been demonstrated in the otherwise unrelated languages Japanese and Telugu; segmentation based on syllable rhythm, however, has been previously demonstrated only for European languages from the Romance family. We here report two target detection experiments in which Korean listeners, presented with speech in Korean and in French, displayed patterns of segmentation like those previously observed in analogous experiments with French listeners. The Korean listeners' accuracy in detecting word-initial target fragments in either language was significantly higher when the fragments corresponded exactly to a syllable in the input than when the fragments were smaller or larger than a syllable. We conclude that Korean and French listeners can call on similar procedures for segmenting speech, and we further propose that perceptual tests of speech segmentation provide a valuable accompaniment to acoustic analyses for establishing languages' rhythmic class membership.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Klein, W. (2008). Time in language, language in time. Language Learning, 58(suppl. 1), 1-12. doi:10.1111/j.1467-9922.2008.00457.x.
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, W. (2008). De gustibus est disputandum! Zeitschrift für Literaturwissenschaft und Linguistik, 152, 7-24.

    Abstract

    There are two core phenomena which any empirical investigation of beauty must account for: the existence of aesthetic experience, and the enormous variability of this experience across times, cultures, and people. Hence, it would seem a hopeless enterprise to determine ‘the very nature’ of beauty, and in fact, none of the many attempts from Antiquity to the present day has found general acceptance. But what we should be able to investigate and understand is how properties of people, for example their varying cultural experiences, are correlated with the properties of the objects which they evaluate. Beauty is neither only in the eye of the observer nor only in the objects observed - it is in the way in which specific observers see specific objects.
  • Klein, W. (2008). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, (152), 5-6.
  • Klein, W. (2008). Die Werke der Sprache: Für ein neues Verhältnis zwischen Literaturwissenschaft und Linguistik. Zeitschrift für Literaturwissenschaft und Linguistik, 150, 8-32.

    Abstract

    All disciplines depend on language; but two of them also have language as an object – literary studies and linguistics. Their objectives are not the same – but they are sufficiently similar to invite close cooperation. This is not what we find; in fact, the development of research over the last decades has led to a relationship which is, in the typical case, characterised by friendly, and sometimes less friendly, ignorance and indifference. This article discusses some of the reasons for this development, and it suggests some conditions under which both sides would benefit from more cooperation.
  • Klein, W., & Schnell, R. (2008). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 150, 5-7.
  • Kuperman, V., Ernestus, M., & Baayen, R. H. (2008). Frequency distributions of uniphones, diphones, and triphones in spontaneous speech. Journal of the Acoustical Society of America, 124(6), 3897-3908. doi:10.1121/1.3006378.

    Abstract

    This paper explores the relationship between the acoustic duration of phonemic sequences and their frequencies of occurrence. The data were obtained from large (sub)corpora of spontaneous speech in Dutch, English, German, and Italian. Acoustic duration of an n-phone is shown to codetermine the n-phone's frequency of use, such that languages preferentially use diphones and triphones that are neither very long nor very short. The observed distributions are well approximated by a theoretical function that quantifies the concurrent action of the self-regulatory processes of minimization of articulatory effort and minimization of perception effort.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Languages and genes: reflections on biolinguistics and the nature-nurture question. Biolinguistics, 2(1), 114-126. Retrieved from http://www.biolinguistics.eu/index.php/biolinguistics/issue/view/7/showToc.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Reply to Bowles (2008). Biolinguistics, 2(2), 256-259.
  • Lai, C. S. L., Gerrelli, D., Monaco, A. P., Fisher, S. E., & Copp, A. J. (2003). FOXP2 expression during brain development coincides with adult sites of pathology in a severe speech and language disorder. Brain, 126(11), 2455-2462. doi:10.1093/brain/awg247.

    Abstract

    Disruption of FOXP2, a gene encoding a forkhead-domain transcription factor, causes a severe developmental disorder of verbal communication, involving profound articulation deficits, accompanied by linguistic and grammatical impairments. Investigation of the neural basis of this disorder has been limited previously to neuroimaging of affected children and adults. The discovery of the gene responsible, FOXP2, offers a unique opportunity to explore the relevant neural mechanisms from a molecular perspective. In the present study, we have determined the detailed spatial and temporal expression pattern of FOXP2 mRNA in the developing brain of mouse and human. We find expression in several structures including the cortical plate, basal ganglia, thalamus, inferior olives and cerebellum. These data support a role for FOXP2 in the development of corticostriatal and olivocerebellar circuits involved in motor control. We find intriguing concordance between regions of early expression and later sites of pathology suggested by neuroimaging. Moreover, the homologous pattern of FOXP2/Foxp2 expression in human and mouse argues for a role for this gene in development of motor-related circuits throughout mammalian species. Overall, this study provides support for the hypothesis that impairments in sequencing of movement and procedural learning might be central to the FOXP2-related speech and language disorder.
  • De Lange, F. P., Spronk, M., Willems, R. M., Toni, I., & Bekkering, H. (2008). Complementary systems for understanding action intentions. Current Biology, 18, 454-457. doi:10.1016/j.cub.2008.02.057.

    Abstract

    How humans understand the intention of others’ actions remains controversial. Some authors have suggested that intentions are recognized by means of a motor simulation of the observed action with the mirror-neuron system [1–3]. Others emphasize that intention recognition is an inferential process, often called ‘‘mentalizing’’ or employing a ‘‘theory of mind,’’ which activates areas well outside the motor system [4–6]. Here, we assessed the contribution of brain regions involved in motor simulation and mentalizing for understanding action intentions via functional brain imaging. Results show that the inferior frontal gyrus (part of the mirror-neuron system) processes the intentionality of an observed action on the basis of the visual properties of the action, irrespective of whether the subject paid attention to the intention or not. Conversely, brain areas that are part of a ‘‘mentalizing’’ network become active when subjects reflect about the intentionality of an observed action, but they are largely insensitive to the visual properties of the observed action. This supports the hypothesis that motor simulation and mentalizing have distinct but complementary functions for the recognition of others’ intentions.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2008). Increase in prefrontal cortical volume following cognitive behavioural therapy in patients with chronic fatigue syndrome. Brain, 131, 2172-2180. doi:10.1093/brain/awn140.

    Abstract

    Chronic fatigue syndrome (CFS) is a disabling disorder, characterized by persistent or relapsing fatigue. Recent studies have detected a decrease in cortical grey matter volume in patients with CFS, but it is unclear whether this cerebral atrophy constitutes a cause or a consequence of the disease. Cognitive behavioural therapy (CBT) is an effective behavioural intervention for CFS, which combines a rehabilitative approach of a graded increase in physical activity with a psychological approach that addresses thoughts and beliefs about CFS which may impair recovery. Here, we test the hypothesis that cerebral atrophy may be a reversible state that can ameliorate with successful CBT. We have quantified cerebral structural changes in 22 CFS patients that underwent CBT and 22 healthy control participants. At baseline, CFS patients had significantly lower grey matter volume than healthy control participants. CBT intervention led to a significant improvement in health status, physical activity and cognitive performance. Crucially, CFS patients showed a significant increase in grey matter volume, localized in the lateral prefrontal cortex. This change in cerebral volume was related to improvements in cognitive speed in the CFS patients. Our findings indicate that the cerebral atrophy associated with CFS is partially reversed after effective CBT. This result provides an example of macroscopic cortical plasticity in the adult human brain, demonstrating a surprisingly dynamic relation between behavioural state and cerebral anatomy. Furthermore, our results reveal a possible neurobiological substrate of psychotherapeutic treatment.
  • Lausberg, H., Cruz, R. F., Kita, S., Zaidel, E., & Ptito, A. (2003). Pantomime to visual presentation of objects: Left hand dyspraxia in patients with complete callosotomy. Brain, 126(2), 343-360. doi:10.1093/brain/awg042.

    Abstract

    Investigations of left hand praxis in imitation and object use in patients with callosal disconnection have yielded divergent results, inducing a debate between two theoretical positions. Whereas Liepmann suggested that the left hemisphere is motor dominant, others maintain that both hemispheres have equal motor competences and propose that left hand apraxia in patients with callosal disconnection is secondary to left hemispheric specialization for language or other task modalities. The present study aims to gain further insight into the motor competence of the right hemisphere by investigating pantomime of object use in split-brain patients. Three patients with complete callosotomy and, as control groups, five patients with partial callosotomy and nine healthy subjects were examined for their ability to pantomime object use to visual object presentation and demonstrate object manipulation. In each condition, 11 objects were presented to the subjects who pantomimed or demonstrated the object use with either hand. In addition, six object pairs were presented to test bimanual coordination. Two independent raters evaluated the videotaped movement demonstrations. While object use demonstrations were perfect in all three groups, the split-brain patients displayed apraxic errors only with their left hands in the pantomime condition. The movement analysis of concept and execution errors included the examination of ipsilateral versus contralateral motor control. As the right hand/left hemisphere performances demonstrated retrieval of the correct movement concepts, concept errors by the left hand were taken as evidence for right hemisphere control. Several types of execution errors reflected a lack of distal motor control indicating the use of ipsilateral pathways. While one split-brain patient controlled his left hand predominantly by ipsilateral pathways in the pantomime condition, the error profile in the other two split-brain patients suggested that the right hemisphere controlled their left hands. In the object use condition, in all three split-brain patients fine-graded distal movements in the left hand indicated right hemispheric control. Our data show that left hand apraxia in split-brain patients is not limited to verbal commands, but also occurs in pantomime to visual presentation of objects. As the demonstration with object in hand was unimpaired in either hand, both hemispheres must contain movement concepts for object use. However, the disconnected right hemisphere is impaired in retrieving the movement concept in response to visual object presentation, presumably because of a deficit in associating perceptual object representation with the movement concepts.
  • Lausberg, H., Kita, S., Zaidel, E., & Ptito, A. (2003). Split-brain patients neglect left personal space during right-handed gestures. Neuropsychologia, 41(10), 1317-1329. doi:10.1016/S0028-3932(03)00047-2.

    Abstract

    Since some patients with right hemisphere damage or with spontaneous callosal disconnection neglect the left half of space, it has been suggested that the left cerebral hemisphere predominantly attends to the right half of space. However, clinical investigations of patients having undergone surgical callosal section have not shown neglect when the hemispheres are tested separately. These observations question the validity of theoretical models that propose a left hemispheric specialisation for attending to the right half of space. The present study aims to investigate neglect and the use of space by either hand in gestural demonstrations in three split-brain patients as compared to five patients with partial callosotomy and 11 healthy subjects. Subjects were asked to demonstrate with precise gestures and without speaking the content of animated scenes with two moving objects. The results show that in the absence of primary perceptual or representational neglect, split-brain patients neglect left personal space in right-handed gestural demonstrations. Since this neglect of left personal space cannot be explained by directional or spatial akinesia, it is suggested that it originates at the conceptual level, where the spatial coordinates for right-hand gestures are planned. The present findings are at odds with the position that the separate left hemisphere possesses adequate mechanisms for acting in both halves of space and neglect results from right hemisphere suppression of this potential. Rather, the results provide support for theoretical models that consider the left hemisphere as specialised for processing the right half of space during the execution of descriptive gestures.
  • Lausberg, H., & Kita, S. (2003). The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking. Brain and Language, 86(1), 57-69. doi:10.1016/S0093-934X(02)00534-5.

    Abstract

    The present study investigates the hand choice in iconic gestures that accompany speech. In 10 right-handed subjects gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left-hand was used as often as the right-hand to display iconic gestures. The choice of the right- or left-hands was determined by semantic aspects of the message. The influence of hemispheric language lateralization on the hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.
  • Lawson, D., Jordan, F., & Magid, K. (2008). On sex and suicide bombing: An evaluation of Kanazawa’s ‘evolutionary psychological imagination’. Journal of Evolutionary Psychology, 6(1), 73-84. doi:10.1556/JEP.2008.1002.

    Abstract

    Kanazawa (2007) proposes the ‘evolutionary psychological imagination’ (p.7) as an authoritative framework for understanding complex social and public issues. As a case study of this approach, Kanazawa addresses acts of international terrorism, specifically suicide bombings committed by Muslim men. It is proposed that a comprehensive explanation of such acts can be gained from taking an evolutionary perspective armed with only three points of cultural knowledge: 1. Muslims are exceptionally polygynous, 2. Muslim men believe they will gain reproductive access to 72 virgins if they die as a martyr and 3. Muslim men have limited access to pornography, which might otherwise relieve the tension built up from intra-sexual competition. We agree with Kanazawa that evolutionary models of human behaviour can contribute to our understanding of even the most complex social issues. However, Kanazawa’s case study, of what he refers to as ‘World War III’, rests on a flawed theoretical argument, lacks empirical backing, and holds little in the way of explanatory power.
  • Levinson, S. C., & Brown, P. (2003). Emmanuel Kant chez les Tenejapans: L'Anthropologie comme philosophie empirique [Translated by Claude Vandeloise for 'Langues et Cognition']. Langues et Cognition, 239-278.

    Abstract

    This is a translation of Levinson and Brown (1994).
  • Levinson, S. C., & Meira, S. (2003). 'Natural concepts' in the spatial topological domain - adpositional meanings in crosslinguistic perspective: An exercise in semantic typology. Language, 79(3), 485-516.

    Abstract

    Most approaches to spatial language have assumed that the simplest spatial notions are (after Piaget) topological and universal (containment, contiguity, proximity, support, represented as semantic primitives such as IN, ON, UNDER, etc.). These concepts would be coded directly in language, above all in small closed classes such as adpositions—thus providing a striking example of semantic categories as language-specific projections of universal conceptual notions. This idea, if correct, should have as a consequence that the semantic categories instantiated in spatial adpositions should be essentially uniform crosslinguistically. This article attempts to verify this possibility by comparing the semantics of spatial adpositions in nine unrelated languages, with the help of a standard elicitation procedure, thus producing a preliminary semantic typology of spatial adpositional systems. The differences between the languages turn out to be so significant as to be incompatible with stronger versions of the UNIVERSAL CONCEPTUAL CATEGORIES hypothesis. Rather, the language-specific spatial adposition meanings seem to emerge as compact subsets of an underlying semantic space, with certain areas being statistical ATTRACTORS or FOCI. Moreover, a comparison of systems with different degrees of complexity suggests the possibility of positing implicational hierarchies for spatial adpositions. But such hierarchies need to be treated as successive divisions of semantic space, as in recent treatments of basic color terms. This type of analysis appears to be a promising approach for future work in semantic typology.
  • Levinson, S. C. (2008). Landscape, seascape and the ontology of places on Rossel Island, Papua New Guinea. Language Sciences, 30(2/3), 256-290. doi:10.1016/j.langsci.2006.12.032.

    Abstract

    This paper describes the descriptive landscape and seascape terminology of an isolate language, Yélî Dnye, spoken on a remote island off Papua New Guinea. The terminology reveals an ontology of landscape terms fundamentally mismatching that in European languages, and in current GIS applications. These landscape terms, and a rich set of seascape terms, provide the ontological basis for toponyms across subdomains. Considering what motivates landscape categorization, three factors are considered: perceptual salience, human affordance and use, and cultural ideas. The data show that cultural ideas and practices are the major categorizing force: they directly impact the ecology with environmental artifacts, construct religious ideas which play a major role in the use of the environment and its naming, and provide abstract cultural templates which organize large portions of vocabulary across subdomains.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2008). Twelve-month-olds communicate helpfully and appropriately for knowledgeable and ignorant partners. Cognition, 108(3), 732-739. doi:10.1016/j.cognition.2008.06.013.

    Abstract

    In the current study we investigated whether 12-month-old infants gesture appropriately for knowledgeable versus ignorant partners, in order to provide them with needed information. In two experiments we found that in response to a searching adult, 12-month-olds pointed more often to an object whose location the adult did not know and thus needed information to find (she had not seen it fall down just previously) than to an object whose location she knew and thus did not need information to find (she had watched it fall down just previously). These results demonstrate that, in contrast to classic views of infant communication, infants’ early pointing at 12 months is already premised on an understanding of others’ knowledge and ignorance, along with a prosocial motive to help others by providing needed information.
  • Liszkowski, U. (2008). Before L1: A differentiated perspective on infant gestures. Gesture, 8(2), 180-196. doi:10.1075/gest.8.2.04lis.

    Abstract

    This paper investigates the social-cognitive and motivational complexities underlying prelinguistic infants' gestural communication. With regard to deictic referential gestures, new and recent experimental evidence shows that infant pointing is a complex communicative act based on social-cognitive skills and cooperative motives. With regard to infant representational gestures, findings suggest the need to re-interpret these gestures as initially non-symbolic gestural social acts. Based on the available empirical evidence, the paper argues that deictic referential communication emerges as a foundation of human communication first in gestures, already before language. Representational symbolic communication, instead, emerges as a transformation of deictic communication first in the vocal modality and, perhaps, in gestures through non-symbolic, socially situated routines.
  • Liszkowski, U., Albrecht, K., Carpenter, M., & Tomasello, M. (2008). Infants’ visual and auditory communication when a partner is or is not visually attending. Infant Behavior and Development, 31(2), 157-167. doi:10.1016/j.infbeh.2007.10.011.
  • Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. Neuroimage, 20, 1934-1943. doi:10.1016/j.neuroimage.2003.07.017.

    Abstract

    The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. In order to clarify their functional significance, we performed this functional magnetic resonance imaging study, which employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.
  • Mace, R., Jordan, F., & Holden, C. (2003). Testing evolutionary hypotheses about human biological adaptation using cross-cultural comparison. Comparative Biochemistry and Physiology A-Molecular & Integrative Physiology, 136(1), 85-94. doi:10.1016/S1095-6433(03)00019-9.

    Abstract

    Physiological data from a range of human populations living in different environments can provide valuable information for testing evolutionary hypotheses about human adaptation. By taking into account the effects of population history, phylogenetic comparative methods can help us determine whether variation results from selection due to particular environmental variables. These selective forces could even be due to cultural traits-which means that gene-culture co-evolution may be occurring. In this paper, we outline two examples of the use of these approaches to test adaptive hypotheses that explain global variation in two physiological traits: the first is lactose digestion capacity in adults, and the second is population sex-ratio at birth. We show that lower than average sex ratio at birth is associated with high fertility, and argue that global variation in sex ratio at birth has evolved as a response to the high physiological costs of producing boys in high fertility populations.
  • Magnuson, J. S., Tanenhaus, M. K., Aslin, R. N., & Dahan, D. (2003). The time course of spoken word learning and recognition: Studies with artificial lexicons. Journal of Experimental Psychology: General, 132(2), 202-227. doi:10.1037/0096-3445.132.2.202.

    Abstract

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
  • Magyari, L. (2003). Mit ne gondoljunk az állatokról? [What not to think about animals?] [Review of the book Wild Minds: What animals really think by M. Hauser]. Magyar Pszichológiai Szemle (Hungarian Psychological Review), 58(3), 417-424. doi:10.1556/MPSzle.58.2003.3.5.
  • Majid, A., Boster, J. S., & Bowerman, M. (2008). The cross-linguistic categorization of everyday events: A study of cutting and breaking. Cognition, 109(2), 235-250. doi:10.1016/j.cognition.2008.08.009.

    Abstract

    The cross-linguistic investigation of semantic categories has a long history, spanning many disciplines and covering many domains. But the extent to which semantic categories are universal or language-specific remains highly controversial. Focusing on the domain of events involving material destruction (“cutting and breaking” events, for short), this study investigates how speakers of different languages implicitly categorize such events through the verbs they use to talk about them. Speakers of 28 typologically, genetically and geographically diverse languages were asked to describe the events shown in a set of videoclips, and the distribution of their verbs across the events was analyzed with multivariate statistics. The results show that there is considerable agreement across languages in the dimensions along which cutting and breaking events are distinguished, although there is variation in the number of categories and the placement of their boundaries. This suggests that there are strong constraints in human event categorization, and that variation is played out within a restricted semantic space.
  • Majid, A. (2003). Towards behavioural genomics. The Psychologist, 16(6), 298-298.
  • Majid, A. (2008). Conceptual maps using multivariate statistics: Building bridges between typological linguistics and psychology [Commentary on Inferring universals from grammatical variation: Multidimensional scaling for typological analysis by William Croft and Keith T. Poole]. Theoretical Linguistics, 34(1), 59-66. doi:10.1515/THLI.2008.005.
  • Majid, A., & Huettig, F. (2008). A crosslinguistic perspective on semantic cognition [Commentary on Precis of Semantic cognition: A parallel distributed approach by Timothy T. Rogers and James L. McClelland]. Behavioral and Brain Sciences, 31(6), 720-721. doi:10.1017/S0140525X08005967.

    Abstract

    Coherent covariation appears to be a powerful explanatory factor accounting for a range of phenomena in semantic cognition. But its role in accounting for the crosslinguistic facts is less clear. Variation in naming, within the same semantic domain, raises vexing questions about the necessary parameters needed to account for the basic facts underlying categorization.
  • Majid, A. (2003). Into the deep. The Psychologist, 16(6), 300-300.
  • Majid, A., & Levinson, S. C. (2008). Language does provide support for basic tastes [Commentary on A study of the science of taste: On the origins and influence of the core ideas by Robert P. Erickson]. Behavioral and Brain Sciences, 31, 86-87. doi:10.1017/S0140525X08003476.

    Abstract

    Recurrent lexicalization patterns across widely different cultural contexts can provide a window onto common conceptualizations. The cross-linguistic data support the idea that sweet, salt, sour, and bitter are basic tastes. In addition, umami and fatty are likely basic tastes, as well.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2008). Discourse structure and relative clause processing. Memory & Cognition, 36(1), 170-181. doi:10.3758/MC.36.1.170.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Tsuda, N., & Majid, A. (2008). Talking about walking: Biomechanics and the language of locomotion. Psychological Science, 19(3), 232-240. doi:10.1111/j.1467-9280.2008.02074.x.

    Abstract

    What drives humans around the world to converge in certain ways in their naming while diverging dramatically in others? We studied how naming patterns are constrained by investigating whether labeling of human locomotion reflects the biomechanical discontinuity between walking and running gaits. Similarity judgments of a student locomoting on a treadmill at different slopes and speeds revealed perception of this discontinuity. Naming judgments of the same clips by speakers of English, Japanese, Spanish, and Dutch showed lexical distinctions between walking and running consistent with the perceived discontinuity. Typicality judgments showed that major gait terms of the four languages share goodness-of-example gradients. These data demonstrate that naming reflects the biomechanical discontinuity between walking and running and that shared elements of naming can arise from correlations among stimulus properties that are dynamic and fleeting. The results support the proposal that converging naming patterns reflect structure in the world, not only acts of construction by observers.
  • Mangione-Smith, R., Stivers, T., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Online commentary during the physical examination: A communication tool for avoiding inappropriate antibiotic prescribing? Social Science and Medicine, 56(2), 313-320.
  • Marcus, G. F., & Fisher, S. E. (2003). FOXP2 in focus: What can genes tell us about speech and language? Trends in Cognitive Sciences, 7, 257-262. doi:10.1016/S1364-6613(03)00104-9.

    Abstract

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. In 2001, a study described the first case of a gene, FOXP2, which is thought to be implicated in our ability to acquire spoken language. In the present article, we discuss how this gene was discovered, what it might do, how it relates to other genes, and what it could tell us about the nature of speech and language development. We explain how FOXP2 could, without being specific to the brain or to our own species, still provide an invaluable entry-point into understanding the genetic cascades and neural pathways that contribute to our capacity for speech and language.
  • Marlow, A. J., Fisher, S. E., Francks, C., MacPhie, I. L., Cherny, S. S., Richardson, A. J., Talcott, J. B., Stein, J. F., Monaco, A. P., & Cardon, L. R. (2003). Use of multivariate linkage analysis for dissection of a complex cognitive trait. American Journal of Human Genetics, 72(3), 561-570. doi:10.1086/368201.

    Abstract

    Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits.
  • Martin, A. E., & McElree, B. (2008). A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis. Journal of Memory and Language, 58(3), 879-906. doi:10.1016/j.jml.2007.06.010.

    Abstract

    Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context. We investigated the online interpretation of VP ellipsis in an eye-tracking experiment and four speed–accuracy tradeoff experiments. To investigate whether the antecedent for a VP ellipsis is accessed with a search or direct-access retrieval process, Experiments 1 and 2 measured the effect of the distance between an ellipsis and its antecedent on the speed and accuracy of comprehension. Accuracy was lower with longer distances, indicating that interpolated material reduced the quality of retrieved information about the antecedent. However, contra a search process, distance did not affect the speed of interpreting ellipsis. This pattern suggests that antecedent representations are content-addressable and retrieved with a direct-access process. To determine whether interpreting ellipsis involves copying antecedent information into the ellipsis site, Experiments 3–5 manipulated the length and complexity of the antecedent. Some types of antecedent complexity lowered accuracy, notably, the number of discourse entities in the antecedent. However, neither antecedent length nor complexity affected the speed of interpreting the ellipsis. This pattern is inconsistent with a copy operation, and it suggests that ellipsis interpretation may involve a pointer to extant structures in memory.
  • McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.

    Abstract

    Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Planning levels in naming and reading complex numerals. Memory & Cognition, 31(8), 1238-1249.

    Abstract

    On the basis of evidence from studies of the naming and reading of numerals, Ferrand (1999) argued that the naming of objects is slower than reading their names, due to a greater response uncertainty in naming than in reading, rather than to an obligatory conceptual preparation for naming, but not for reading. We manipulated the need for conceptual preparation, while keeping response uncertainty constant in the naming and reading of complex numerals. In Experiment 1, participants named three-digit Arabic numerals either as house numbers or clock times. House number naming latencies were determined mostly by morphophonological factors, such as morpheme frequency and the number of phonemes, whereas clock time naming latencies revealed an additional conceptual involvement. In Experiment 2, the numerals were presented in alphabetic format and had to be read aloud. Reading latencies were determined mostly by morphophonological factors in both modes. These results suggest that conceptual preparation, rather than response uncertainty, is responsible for the difference between naming and reading latencies.
  • Meyer, A. S., Roelofs, A., & Levelt, W. J. M. (2003). Word length effects in object naming: The role of a response criterion. Journal of Memory and Language, 48(1), 131-147. doi:10.1016/S0749-596X(02)00509-0.

    Abstract

    According to Levelt, Roelofs, and Meyer (1999) speakers generate the phonological and phonetic representations of successive syllables of a word in sequence and only begin to speak after having fully planned at least one complete phonological word. Therefore, speech onset latencies should be longer for long than for short words. We tested this prediction in four experiments in which Dutch participants named or categorized objects with monosyllabic or disyllabic names. Experiment 1 yielded a length effect on production latencies when objects with long and short names were tested in separate blocks, but not when they were mixed. Experiment 2 showed that the length effect was not due to a difference in the ease of object recognition. Experiment 3 replicated the results of Experiment 1 using a within-participants design. In Experiment 4, the long and short target words appeared in a phrasal context. In addition to the speech onset latencies, we obtained the viewing times for the target objects, which have been shown to depend on the time necessary to plan the form of the target names. We found word length effects for both dependent variables, but only when objects with short and long names were presented in separate blocks. We argue that in pure and mixed blocks speakers used different response deadlines, which they tried to meet by either generating the motor programs for one syllable or for all syllables of the word before speech onset. Computer simulations using WEAVER++ support this view.
  • Meyer, A. S., Ouellet, M., & Häcker, C. (2008). Parallel processing of objects in a naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 982-987. doi:10.1037/0278-7393.34.4.982.

    Abstract

    The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.
  • Mitterer, H., & De Ruiter, J. P. (2008). Recalibrating color categories using world knowledge. Psychological Science, 19(7), 629-634. doi:10.1111/j.1467-9280.2008.02133.x.

    Abstract

    When the perceptual system uses color to facilitate object recognition, it must solve the color-constancy problem: The light an object reflects to an observer's eyes confounds properties of the source of the illumination with the surface reflectance of the object. Information from the visual scene (bottom-up information) is insufficient to solve this problem. We show that observers use world knowledge about objects and their prototypical colors as a source of top-down information to improve color constancy. Specifically, observers use world knowledge to recalibrate their color categories. Our results also suggest that similar effects previously observed in language perception are the consequence of a general perceptual process.
  • Mitterer, H., & Ernestus, M. (2008). The link between speech perception and production is phonological and abstract: Evidence from the shadowing task. Cognition, 109(1), 168-173. doi:10.1016/j.cognition.2008.08.002.

    Abstract

    This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.
  • Mitterer, H., Yoneyama, K., & Ernestus, M. (2008). How we hear what is hardly there: Mechanisms underlying compensation for /t/-reduction in speech comprehension. Journal of Memory and Language, 59, 133-152. doi:10.1016/j.jml.2008.02.004.

    Abstract

    In four experiments, we investigated how listeners compensate for reduced /t/ in Dutch. Mitterer and Ernestus [Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers lenite: evidence from /t/-lenition in Dutch. Journal of Phonetics, 34, 73–103] showed that listeners are biased to perceive a /t/ more easily after /s/ than after /n/, compensating for the tendency of speakers to reduce word-final /t/ after /s/ in spontaneous conversations. We tested the robustness of this phonological context effect in perception with three very different experimental tasks: an identification task, a discrimination task with native listeners and with non-native listeners who do not have any experience with /t/-reduction, and a passive listening task (using electrophysiological dependent measures). The context effect was generally robust against these experimental manipulations, although we also observed some deviations from the overall pattern. Our combined results show that the context effect in compensation for reduced /t/ results from a complex process involving auditory constraints, phonological learning, and lexical constraints.
  • Morgan, J. L., Van Elswijk, G., & Meyer, A. S. (2008). Extrafoveal processing of objects in a naming task: Evidence from word probe experiments. Psychonomic Bulletin & Review, 15, 561-565. doi:10.3758/PBR.15.3.561.

    Abstract

    In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2008). Speech planning during multiple-object naming: Effects of ageing. Quarterly Journal of Experimental Psychology, 61, 1217-1238. doi:10.1080/17470210701467912.

    Abstract

    Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.
  • Narasimhan, B., & Dimroth, C. (2008). Word order and information status in child language. Cognition, 107, 317-329. doi:10.1016/j.cognition.2007.07.010.

    Abstract

    In expressing rich, multi-dimensional thought in language, speakers are influenced by a range of factors that influence the ordering of utterance constituents. A fundamental principle that guides constituent ordering in adults has to do with information status, the accessibility of referents in discourse. Typically, adults order previously mentioned referents (“old” or accessible information) first, before they introduce referents that have not yet been mentioned in the discourse (“new” or inaccessible information) at both sentential and phrasal levels. Here we ask whether a similar principle influences ordering patterns at the phrasal level in children who are in the early stages of combining words productively. Prior research shows that when conveying semantic relations, children reproduce language-specific ordering patterns in the input, suggesting that they do not have a bias for any particular order to describe “who did what to whom”. But our findings show that when they label “old” versus “new” referents, 3- to 5-year-old children prefer an ordering pattern opposite to that of adults (Study 1). Children’s ordering preference is not derived from input patterns, as “old-before-new” is also the preferred order in caregivers’ speech directed to young children (Study 2). Our findings demonstrate that a key principle governing ordering preferences in adults does not originate in early childhood, but develops: from new-to-old to old-to-new.
  • Narasimhan, B. (2003). Motion events and the lexicon: The case of Hindi. Lingua, 113(2), 123-160. doi:10.1016/S0024-3841(02)00068-2.

    Abstract

    English, and a variety of Germanic languages, allow constructions such as the bottle floated into the cave, whereas languages such as Spanish, French, and Hindi are highly restricted in allowing manner of motion verbs to occur with path phrases. This typological observation has been accounted for in terms of the conflation of complex meaning in basic or derived verbs [Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Levin, B., Rappaport-Hovav, M., 1995. Unaccusativity: At the Syntax–Lexical Semantics Interface. MIT Press, Cambridge, MA], or the presence of path “satellites” with special grammatical properties in the lexicon of languages such as English, which allow such phrasal combinations [cf. Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Talmy, L., 1991. Path to realisation: via aspect and result. In: Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 480–520]. I use data from Hindi to show that there is little empirical support for the claim that the constraint on the phrasal combination is correlated with differences in verb meaning or the presence of satellites in the lexicon of a language. However, proposals which eschew lexicalization accounts for more general aspectual constraints on the manner verb + path phrase combination in Spanish-type languages (Aske, J., 1989. Path Predicates in English and Spanish: A Closer look. In: Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 1–14) cannot account for the full range of data in Hindi either. On the basis of these facts, I argue that an empirically adequate account can be formulated in terms of a general mapping constraint, formulated in terms of whether the lexical requirements of the verb strictly or weakly constrain its syntactic privileges of occurrence. In Hindi, path phrases can combine with manner of motion verbs only to the degree that they are compatible with the semantic profile of the verb. Path phrases in English, on the other hand, can extend the verb's “semantic profile” subject to certain constraints. I suggest that path phrases are licensed in English by the semantic requirements of the “construction” in which they appear rather than by the selectional requirements of the verb (Fillmore, C., Kay, P., O'Connor, M.C., 1988, Regularity and idiomaticity in grammatical constructions. Language 64, 501–538; Jackendoff, 1990, Semantic Structures. MIT Press, Cambridge, MA; Goldberg, 1995, Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago and London).
  • Need, A. C., Attix, D. K., McEvoy, J. M., Cirulli, E. T., Linney, K. N., Wagoner, A. P., Gumbs, C. E., Giegling, I., Möller, H.-J., Francks, C., Muglia, P., Roses, A., Gibson, G., Weale, M. E., Rujescu, D., & Goldstein, D. B. (2008). Failure to replicate effect of Kibra on human memory in two large cohorts of European origin. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147B, 667-668. doi:10.1002/ajmg.b.30658.

    Abstract

    It was recently suggested that the Kibra polymorphism rs17070145 has a strong effect on multiple episodic memory tasks in humans. We attempted to replicate this using two cohorts of European genetic origin (n = 319 and n = 365). We found no association with either the original SNP or a set of tagging SNPs in the Kibra gene with multiple verbal memory tasks, including one that was an exact replication (Auditory Verbal Learning Task, AVLT). These results suggest that Kibra does not have a strong and general effect on human memory.

    Additional information

    SupplementaryMethodsIAmJMedGen.doc
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Nieuwland, M. S., & Kuperberg, G. R. (2008). When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation. Psychological Science, 19(12), 1213-1218. doi:10.1111/j.1467-9280.2008.02226.x.

    Abstract

    Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like not. However, studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than did true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn't bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny's fur isn't very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
  • Nobe, S., Furuyama, N., Someya, Y., Sekine, K., Suzuki, M., & Hayashi, K. (2008). A longitudinal study on gesture of simultaneous interpreter. The Japanese Journal of Speech Sciences, 8, 63-83.
  • Norris, D., & McQueen, J. M. (2008). Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review, 115(2), 357-395. doi:10.1037/0033-295X.115.2.357.

    Abstract

    A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
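    The core Bayesian computation this abstract describes can be sketched in a few lines: lexical hypotheses are scored by combining a prior with graded phoneme evidence, then normalized into posteriors. This is only an illustrative toy, not the published model; the mini-lexicon, priors, and probability values below are all invented for the example.

    ```python
    # Toy sketch of Bayesian lexical evaluation in the spirit of Shortlist B:
    # posterior(word) is proportional to prior(word) times the product of
    # per-position phoneme probabilities. All numbers here are invented.

    def posterior(lexicon, priors, input_probs):
        """lexicon: word -> phoneme list; input_probs: one dict per input
        position mapping phoneme -> probability (graded evidence)."""
        scores = {}
        for word, phonemes in lexicon.items():
            likelihood = 1.0
            for pos, ph in enumerate(phonemes):
                likelihood *= input_probs[pos].get(ph, 0.0)
            scores[word] = priors[word] * likelihood
        total = sum(scores.values())
        return {w: s / total for w, s in scores.items()} if total else scores

    lexicon = {"maf": ["m", "a", "f"], "mas": ["m", "a", "s"]}
    priors = {"maf": 0.5, "mas": 0.5}
    # An ambiguous final fricative: the evidence is split between [f] and [s].
    evidence = [{"m": 0.9}, {"a": 0.9}, {"f": 0.4, "s": 0.6}]
    print(posterior(lexicon, priors, evidence))  # mas wins, 0.6 vs. 0.4
    ```

    With equal priors the posterior simply tracks the relative phoneme evidence; unequal priors would reproduce a word-frequency effect, one of the findings the model is reported to capture.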
  • Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.

    Abstract

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [witlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
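    The retuning logic this abstract reports can be illustrated with a minimal sketch: the lexically indicated category absorbs the ambiguous token, shifting the category boundary in the direction of the group's training. This is my own illustration, not the authors' model; the continuum values and learning rate are invented.

    ```python
    # Toy sketch of lexically guided category retuning. The [f]-[s] continuum
    # runs from 0 ([f]-like) to 1 ([s]-like); the ambiguous training sound
    # sits at 0.5. All numeric values are invented for illustration.

    def retune(f_mean, s_mean, ambiguous, lexical_label, rate=0.5):
        """Shift the mean of the lexically indicated category toward the
        ambiguous token; the boundary is the midpoint of the two means."""
        if lexical_label == "f":
            f_mean += rate * (ambiguous - f_mean)
        else:
            s_mean += rate * (ambiguous - s_mean)
        return f_mean, s_mean, (f_mean + s_mean) / 2

    # Group A heard the ambiguous sound in [f]-final words (e.g., witlo?):
    _, _, boundary_a = retune(0.2, 0.8, 0.5, "f")
    # Group B heard it in [s]-final words (e.g., naaldbo?):
    _, _, boundary_b = retune(0.2, 0.8, 0.5, "s")
    # Group A's boundary moves toward the [s] end, so more of the continuum
    # is categorized as [f] -- matching the reported labeling shift.
    print(boundary_a, boundary_b)
    ```

    The asymmetry between the two groups' boundaries is the key prediction: each group expands the category in which it heard the ambiguous sound.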
  • Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41(3), 371-377. doi:10.1016/S0028-3932(02)00168-9.

    Abstract

    Regions of the prefrontal cortex (PFC) are typically activated in many different cognitive functions. In most studies, the focus has been on the role of specific PFC regions in specific cognitive domains, but more recently similarities in PFC activations across cognitive domains have been stressed. Such similarities may suggest that a region mediates a common function across a variety of cognitive tasks. In this study, we compared the activation patterns associated with tests of working memory, semantic memory and episodic memory. The results converged on a general involvement of four regions across memory tests. These were located in left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC and dorsal anterior cingulate cortex. These findings provide evidence that some PFC regions are engaged during many different memory tests. The findings are discussed in relation to theories about the functional contribution of the PFC regions and the architecture of memory.
  • Nyberg, L., Sandblom, J., Jones, S., Stigsdotter Neely, A., Petersson, K. M., Ingvar, M., & Bäckman, L. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100(23), 13728-13733. doi:10.1073/pnas.1735487100.

    Abstract

    Cognitive studies show that both younger and older adults can increase their memory performance after training in using a visuospatial mnemonic, although age-related memory deficits tend to be magnified rather than reduced after training. Little is known about the changes in functional brain activity that accompany training-induced memory enhancement, and whether age-related activity changes are associated with the size of training-related gains. Here, we demonstrate that younger adults show increased activity during memory encoding in occipito-parietal and frontal brain regions after learning the mnemonic. Older adults did not show increased frontal activity, and only those elderly persons who benefited from the mnemonic showed increased occipitoparietal activity. These findings suggest that age-related differences in cognitive reserve capacity may reflect both a frontal processing deficiency and a posterior production deficiency.
  • Obleser, J., Eisner, F., & Kotz, S. A. (2008). Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32), 8116-8124. doi:10.1523/JNEUROSCI.1290-08.2008.

    Abstract

    Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
  • Ogdie, M. N., MacPhie, I. L., Minassian, S. L., Yang, M., Fisher, S. E., Francks, C., Cantor, R. M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2003). A genomewide scan for Attention-Deficit/Hyperactivity Disorder in an extended sample: Suggestive linkage on 17p11. American Journal of Human Genetics, 72(5), 1268-1279. doi:10.1086/375139.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD [MIM 143465]) is a common, highly heritable neurobehavioral disorder of childhood onset, characterized by hyperactivity, impulsivity, and/or inattention. As part of an ongoing study of the genetic etiology of ADHD, we have performed a genomewide linkage scan in 204 nuclear families comprising 853 individuals and 270 affected sibling pairs (ASPs). Previously, we reported genomewide linkage analysis of a “first wave” of these families composed of 126 ASPs. A follow-up investigation of one region on 16p yielded significant linkage in an extended sample. The current study extends the original sample of 126 ASPs to 270 ASPs and provides linkage analyses of the entire sample, using polymorphic microsatellite markers that define an ∼10-cM map across the genome. Maximum LOD score (MLS) analysis identified suggestive linkage for 17p11 (MLS=2.98) and four nominal regions with MLS values >1.0, including 5p13, 6q14, 11q25, and 20q13. These data, taken together with the fine mapping on 16p13, suggest two regions as highly likely to harbor risk genes for ADHD: 16p13 and 17p11. Interestingly, both regions, as well as 5p13, have been highlighted in genomewide scans for autism.
  • Otten, M., & Van Berkum, J. J. A. (2008). Discourse-based word anticipation during language processing: Prediction or priming? Discourse Processes, 45, 464-496. doi:10.1080/01638530802356463.

    Abstract

    Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words “on the fly” as a text unfolds. In 2 event-related potentials experiments, this study examined whether these predictions are based on the exact message conveyed by the prior discourse or on simpler word-based priming mechanisms. Participants read texts that strongly supported the prediction of a specific word, mixed with non-predictive control texts that contained the same prime words. In Experiment 1A, anomalous words that replaced a highly predictable (as opposed to a non-predictable but coherent) word elicited a long-lasting positive shift, suggesting that the prior discourse had indeed led people to predict specific words. In Experiment 1B, adjectives whose suffix mismatched the predictable noun's syntactic gender elicited a short-lived late negativity in predictive stories but not in prime control stories. Taken together, these findings reveal that the conceptual basis for predicting specific upcoming words during reading is the exact message conveyed by the discourse and not the mere presence of prime words.
  • Ozyurek, A., Kita, S., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2008). Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish. Developmental Psychology, 44(4), 1040-1054. doi:10.1037/0012-1649.44.4.1040.

    Abstract

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children’s gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.
  • Patel, A. D., Iversen, J. R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology, 22(7/8), 776-789. doi:10.1080/02687030701803804.

    Abstract

    Background: Growing evidence for overlap in the syntactic processing of language and music in non-brain-damaged individuals leads to the question of whether aphasic individuals with grammatical comprehension problems in language also have problems processing structural relations in music.

    Aims: The current study sought to test musical syntactic processing in individuals with Broca's aphasia and grammatical comprehension deficits, using both explicit and implicit tasks.

    Methods & Procedures: Two experiments were conducted. In the first experiment 12 individuals with Broca's aphasia (and 14 matched controls) were tested for their sensitivity to grammatical and semantic relations in sentences, and for their sensitivity to musical syntactic (harmonic) relations in chord sequences. An explicit task (acceptability judgement of novel sequences) was used. The second experiment, with 9 individuals with Broca's aphasia (and 12 matched controls), probed musical syntactic processing using an implicit task (harmonic priming).

    Outcomes & Results: In both experiments the aphasic group showed impaired processing of musical syntactic relations. Control experiments indicated that this could not be attributed to low-level problems with the perception of pitch patterns or with auditory short-term memory for tones.

    Conclusions: The results suggest that musical syntactic processing in agrammatic aphasia deserves systematic investigation, and that such studies could help probe the nature of the processing deficits underlying linguistic agrammatism. Methodological suggestions are offered for future work in this little-explored area.
  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left lateralized frontoparietal and anterior temporal activations while surface-based perceptually oriented processing (shallow instruction) yielded right lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex is related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Poletiek, F. H. (2008). Het probleem van escalerende beschuldigingen [The problem of escalating accusations; review of the book Kindermishandeling by H. Crombag and H. den Hartog]. Maandblad voor Geestelijke Volksgezondheid, (2), 163-166.
  • Proios, H., Asaridou, S. S., & Brugger, P. (2008). Random number generation in patients with aphasia: A test of executive functions. Acta Neuropsychologica, 6(2), 157-168.

    Abstract

    Randomization performance was studied using the "Mental Dice Task" in 20 patients with aphasia (APH) and 101 elderly normal control subjects (NC). The produced sequences were compared to 100 computer-generated pseudorandom sequences with respect to 7 measures of sequential bias. The performance of APH differed significantly from NC participants, according to all but one measure, i.e. Turning Point Index (points of change between ascending and descending sequences). NC participants differed significantly from the computer generated sequences, according to all measures of randomness. Finally, APH differed significantly from the computer simulator, according to all measures but mean Repetition Gap score (gap between a digit and its reoccurrence). Despite the heterogeneity of our APH group, there were no significant differences in randomization performance between patients with different language impairments. All the APH displayed a distinct performance profile, with more response stereotypy, counting tendencies, and inhibition problems, as hypothesised, while at the same time responding more randomly than NC by showing less of a cycling strategy and more number repetitions.
  • Rapold, C. J., & Widlok, T. (2008). Dimensions of variability in Northern Khoekhoe language and culture. Southern African Humanities, 20, 133-161. Retrieved from http://www.sahumanities.org.za/RapoldWidlok_203.aspx.

    Abstract

    This article takes an interdisciplinary route towards explaining the complex history of Hai//om culture and language. We begin this article with a short review of ideas relating to 'origins' and historical reconstructions as they are currently played out among Khoekhoe groups in Namibia, in particular with regard to the Hai//om. We then take a comparative look at parts of the kinship system and the tonology of ≠Âkhoe Hai//om and other variants of Khoekhoe. With regard to the kinship and naming system, we see patterns that show similarities with Nama and Damara on the one hand but also with 'San' groups on the other hand. With regard to tonology, new data from three northern Khoekhoe varieties shows similarities as well as differences with Standard Namibian Khoekhoe and Ju and Tuu varieties. The historical scenarios that might explain these facts suggest different centres of innovations and opposite directions of diffusion. The anthropological and linguistic data demonstrates that only a fine-grained and multi-layered approach that goes far beyond any simplistic dichotomies can do justice to the Hai//om riddle.
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001)- Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations between the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we will comment on their findings and conclusions, beginning first with a brief review of literacy and aphasia research, and complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which will be focusing on methodological issues and the importance of taking normative values in consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Roberts, L., Gullberg, M., & Indefrey, P. (2008). Online pronoun resolution in L2 discourse: L1 influence and general learner effects. Studies in Second Language Acquisition, 30(3), 333-357. doi:10.1017/S0272263108080480.

    Abstract

    This study investigates whether advanced second language (L2) learners of a nonnull subject language (Dutch) are influenced by their null subject first language (L1) (Turkish) in their offline and online resolution of subject pronouns in L2 discourse. To tease apart potential L1 effects from possible general L2 processing effects, we also tested a group of German L2 learners of Dutch who were predicted to perform like the native Dutch speakers. The two L2 groups differed in their offline interpretations of subject pronouns. The Turkish L2 learners exhibited a L1 influence, because approximately half the time they interpreted Dutch subject pronouns as they would overt pronouns in Turkish, whereas the German L2 learners performed like the Dutch controls, interpreting pronouns as coreferential with the current discourse topic. This L1 effect was not in evidence in eye-tracking data, however. Instead, the L2 learners patterned together, showing an online processing disadvantage when two potential antecedents for the pronoun were grammatically available in the discourse. This processing disadvantage was in evidence irrespective of the properties of the learners' L1 or their final interpretation of the pronoun. Therefore, the results of this study indicate both an effect of the L1 on the L2 in offline resolution and a general L2 processing effect in online subject pronoun resolution.
  • Roberts, L. (2008). Processing temporal constraints and some implications for the investigation of second language sentence processing and acquisition. Commentary on Baggio. Language Learning, 58(suppl. 1), 57-61. doi:10.1111/j.1467-9922.2008.00461.x.
  • Roby, A. C., & Kidd, E. (2008). The referential communication skills of children with imaginary companions. Developmental Science, 11(4), 531-40. doi:10.1111/j.1467-7687.2008.00699.x.

    Abstract

    The present study investigated the referential communication skills of children with imaginary companions (ICs). Twenty-two children with ICs aged between 4 and 6 years were compared to 22 children without ICs (NICs). The children were matched for age, gender, birth order, number of siblings, and parental education. All children completed the Test of Referential Communication (Camaioni, Ercolani & Lloyd, 1995). The results showed that the children with ICs performed better than the children without ICs on the speaker component of the task. In particular, the IC children were better able to identify a specific referent to their interlocutor than were the NIC children. Furthermore, the IC children described less redundant features of the target picture than did the NIC children. The children did not differ in the listening comprehension component of the task. Overall, the results suggest that the IC children had a better understanding of their interlocutor's information requirements in conversation. The role of pretend play in the development of communicative competence is discussed in light of these results.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch--English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.