Publications

  • Gullberg, M., & De Bot, K. (Eds.). (2008). Gestures in language development [Special Issue]. Gesture, 8(2).
  • Gullberg, M., & McCafferty, S. G. (2008). Introduction to gesture and SLA: Toward an integrated approach. Studies in Second Language Acquisition, 30(2), 133-146. doi:10.1017/S0272263108080285.

    Abstract

    The title of this special issue, Gesture and SLA: Toward an Integrated Approach, stems in large part from the idea known as integrationism, principally set forth by Harris (2003, 2005), which posits that it is time to “demythologize” linguistics, moving away from the “orthodox exponents” that have idealized the notion of language. The integrationist approach intends a view that focuses on communication—that is, language in use, language as a “fact of life” (Harris, 2003, p. 50). Although not all gesture studies embrace an integrationist view—indeed, the field applies numerous theories across various disciplines—it is nonetheless true that to study gesture is to study what has traditionally been called paralinguistic modes of interaction, with the paralinguistic label given on the assumption that gesture is not part of the core meaning of what is rendered linguistically. However, arguably, most researchers within gesture studies would maintain just the opposite: The studies presented in this special issue reflect a view whereby gesture is regarded as a central aspect of language in use, integral to how we communicate (make meaning) both with each other and with ourselves.
  • Gullberg, M., Hendriks, H., & Hickmann, M. (2008). Learning to talk and gesture about motion in French. First Language, 28(2), 200-236. doi:10.1177/0142723707088074.

    Abstract

    This study explores how French adults and children aged four and six years talk and gesture about voluntary motion, examining (1) how they encode path and manner in speech, (2) how they encode this information in accompanying gestures, and (3) whether gestures are co-expressive with speech or express other information. When path and manner are equally relevant, children’s and adults’ speech and gestures both focus on path, rather than on manner. Moreover, gestures are predominantly co-expressive with speech at all ages. However, when they are non-redundant, adults tend to gesture about path while talking about manner, whereas children gesture about both path and manner while talking about path. The discussion highlights implications for our understanding of speakers’ representations and their development.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Syntax-related ERP-effects in Dutch. Cognitive Brain Research, 16(1), 38-50. doi:10.1016/S0926-6410(02)00208-2.

    Abstract

    In two studies subjects were required to read Dutch sentences that in some cases contained a syntactic violation, in other cases a semantic violation. All syntactic violations were word category violations. The design excluded differential contributions of expectancy to influence the syntactic violation effects. The syntactic violations elicited an Anterior Negativity between 300 and 500 ms. This negativity was bilateral and had a frontal distribution. Over posterior sites the same violations elicited a P600/SPS starting at about 600 ms. The semantic violations elicited an N400 effect. The topographic distribution of the AN was more frontal than the distribution of the classical N400 effect, indicating that the underlying generators of the AN and the N400 are, at least to a certain extent, non-overlapping. Experiment 2 partly replicated the design of Experiment 1, but with differences in rate of presentation and in the distribution of items over subjects, and without semantic violations. The word category violations resulted in the same effects as were observed in Experiment 1, showing that they were independent of some of the specific parameters of Experiment 1. The discussion presents a tentative account of the functional differences in the triggering conditions of the AN and the P600/SPS.
  • Hagoort, P. (2008). Should psychology ignore the language of the brain? Current Directions in Psychological Science, 17(2), 96-101. doi:10.1111/j.1467-8721.2008.00556.x.

    Abstract

    Claims that neuroscientific data do not contribute to our understanding of psychological functions have been made recently. Here I argue that these criticisms are solely based on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Real-time semantic compensation in patients with agrammatic comprehension: Electrophysiological evidence for multiple-route plasticity. Proceedings of the National Academy of Sciences of the United States of America, 100(7), 4340-4345. doi:10.1073/pnas.0230613100.

    Abstract

    To understand spoken language requires that the brain provides rapid access to different kinds of knowledge, including the sounds and meanings of words, and syntax. Syntax specifies constraints on combining words in a grammatically well-formed manner. Agrammatic patients are deficient in their ability to use these constraints, due to a lesion in the perisylvian area of the language-dominant hemisphere. We report a study on real-time auditory sentence processing in agrammatic comprehenders, examining their ability to accommodate damage to the language system. We recorded event-related brain potentials (ERPs) in agrammatic comprehenders, nonagrammatic aphasics, and age-matched controls. When listening to sentences with grammatical violations, the agrammatic aphasics did not show the same syntax-related ERP effect as the two other subject groups. Instead, the waveforms of the agrammatic aphasics were dominated by a meaning-related ERP effect, presumably reflecting their attempts to achieve understanding by the use of semantic constraints. These data demonstrate that although agrammatic aphasics are impaired in their ability to exploit syntactic information in real time, they can reduce the consequences of a syntactic deficit by exploiting a semantic route. They thus provide evidence for the compensation of a syntactic deficit by a stronger reliance on another route in mapping sound onto meaning. This is a form of plasticity that we refer to as multiple-route plasticity.
  • Hagoort, P. (2008). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363, 1055-1069. doi:10.1098/rstb.2007.2159.

    Abstract

    This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
  • Li, X., Hagoort, P., & Yang, Y. (2008). Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese. Journal of Cognitive Neuroscience, 20(5), 906-915. doi:10.1162/jocn.2008.20512.

    Abstract

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could either provide new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P. (2008). Mijn omweg naar de filosofie [My detour to philosophy]. Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 100(4), 303-310.
  • Hagoort, P. (1989). Processing of lexical ambiguities: a comment on Milberg, Blumstein, and Dworetzky (1987). Brain and Language, 36, 335-348. doi:10.1016/0093-934X(89)90070-9.

    Abstract

    In a study by Milberg, Blumstein, and Dworetzky (1987), normal control subjects and Wernicke's and Broca's aphasics performed a lexical decision task on the third element of auditorily presented triplets of words with either a word or a nonword as target. In three of the four types of word triplets, the first and the third words were related to one or both meanings of the second word, which was semantically ambiguous. The fourth type of word triplet consisted of three unrelated, unambiguous words, functioning as baseline. Milberg et al. (1987) claim that the results for their control subjects are similar to those reported by Schvaneveldt, Meyer, and Becker's original study (1976) with the same prime types, and so interpret these as evidence for a selective lexical access of the different meanings of ambiguous words. It is argued here that Milberg et al. only partially replicate the Schvaneveldt et al. results. Moreover, the results of Milberg et al. are not fully in line with the selective access hypothesis adopted. Replication of the Milberg et al. (1987) study with Dutch materials, using both a design without and a design with repetition of the same target words for the same subjects led to the original pattern as reported by Schvaneveldt et al. (1976). In the design with four separate presentations of the same target word, a strong repetition effect was found. It is therefore argued that the discrepancy between the Milberg et al. results on the one hand, and the Schvaneveldt et al. results on the other, might be due to the absence of a control for repetition effects in the within-subject design used by Milberg et al. It is concluded that this makes the results for both normal and aphasic subjects in the latter study difficult to interpret in terms of a selective access model for normal processing.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., & Call, J. (2008). Imitation recognition in great apes. Current Biology, 18(7), 288-290. doi:10.1016/j.cub.2008.02.031.

    Abstract

    Human infants imitate not only to acquire skill, but also as a fundamental part of social interaction [1], [2] and [3]. They recognise when they are being imitated by showing increased visual attention to imitators (implicit recognition) and by engaging in so-called testing behaviours (explicit recognition). Implicit recognition affords the ability to recognize structural and temporal contingencies between actions across agents, whereas explicit recognition additionally affords the ability to understand the directional impact of one's own actions on others' actions [1], [2] and [3]. Imitation recognition is thought to foster understanding of social causality, intentionality in others and the formation of a concept of self as different from other [3], [4] and [5]. Pigtailed macaques (Macaca nemestrina) implicitly recognize being imitated [6], but unlike chimpanzees [7], they show no sign of explicit imitation recognition. We investigated imitation recognition in 11 individuals from the four species of non-human great apes. We replicated results previously found with a chimpanzee [7] and, critically, have extended them to the other great ape species. Our results show a general prevalence of imitation recognition in all great apes and thereby demonstrate important differences between great apes and monkeys in their understanding of contingent social interactions.
  • Hayano, K. (2008). Talk and body: Negotiating action framework and social relationship in conversation. Studies in English and American Literature, 43, 187-198.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., & Carlyon, R. P. (2008). Perceptual learning of noise vocoded words: Effects of feedback and lexicality. Journal of Experimental Psychology: Human Perception and Performance, 34(2), 460-474. doi:10.1037/0096-1523.34.2.460.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners that heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words suggesting a sublexical locus for learning and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Huettig, F., & Hartsuiker, R. J. (2008). When you name the pizza you look at the coin and the bread: Eye movements reveal semantic activation during word production. Memory & Cognition, 36(2), 341-360. doi:10.3758/MC.36.2.341.

    Abstract

    Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 × 2 array and identified and named a target picture on the basis of either category (e.g., "What is the name of the musical instrument?") or visual-form (e.g., "What is the name of the circular object?") instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).
  • Hunley, K., Dunn, M., Lindström, E., Reesink, G., Terrill, A., Healy, M. E., Koki, G., Friedlaender, F. R., & Friedlaender, J. S. (2008). Genetic and linguistic coevolution in Northern Island Melanesia. PLoS Genetics, 4(10): e1000239. doi:10.1371/journal.pgen.1000239.

    Abstract

    Recent studies have detailed a remarkable degree of genetic and linguistic diversity in Northern Island Melanesia. Here we utilize that diversity to examine two models of genetic and linguistic coevolution. The first model predicts that genetic and linguistic correspondences formed following population splits and isolation at the time of early range expansions into the region. The second is analogous to the genetic model of isolation by distance, and it predicts that genetic and linguistic correspondences formed through continuing genetic and linguistic exchange between neighboring populations. We tested the predictions of the two models by comparing observed and simulated patterns of genetic variation, genetic and linguistic trees, and matrices of genetic, linguistic, and geographic distances. The data consist of 751 autosomal microsatellites and 108 structural linguistic features collected from 33 Northern Island Melanesian populations. The results of the tests indicate that linguistic and genetic exchange have erased any evidence of a splitting and isolation process that might have occurred early in the settlement history of the region. The correlation patterns are also inconsistent with the predictions of the isolation by distance coevolutionary process in the larger Northern Island Melanesian region, but there is strong evidence for the process in the rugged interior of the largest island in the region (New Britain). There we found some of the strongest recorded correlations between genetic, linguistic, and geographic distances. We also found that, throughout the region, linguistic features have generally been less likely to diffuse across population boundaries than genes. The results from our study, based on exceptionally fine-grained data, show that local genetic and linguistic exchange are likely to obscure evidence of the early history of a region, and that language barriers do not particularly hinder genetic exchange. In contrast, global patterns may emphasize more ancient demographic events, including population splits associated with the early colonization of major world regions.
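    The comparison of genetic, linguistic, and geographic distance matrices described above is typically carried out with Mantel-style permutation tests. The abstract does not name the exact procedure, so the following is only an illustrative pure-Python sketch of how two distance matrices can be correlated while respecting their non-independence (function and parameter names are my own):

    ```python
    import random
    from itertools import combinations

    def upper_triangle(m):
        """Flatten the upper triangle (i < j) of a square distance matrix."""
        n = len(m)
        return [m[i][j] for i, j in combinations(range(n), 2)]

    def pearson(xs, ys):
        """Pearson correlation between two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def mantel(a, b, permutations=999, seed=1):
        """Correlate two distance matrices; assess significance by jointly
        permuting the rows and columns of one matrix, which preserves its
        internal structure (the key idea behind the Mantel test)."""
        r_obs = pearson(upper_triangle(a), upper_triangle(b))
        n = len(a)
        rng = random.Random(seed)
        hits = 0
        for _ in range(permutations):
            perm = list(range(n))
            rng.shuffle(perm)
            permuted = [[a[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
            if pearson(upper_triangle(permuted), upper_triangle(b)) >= r_obs:
                hits += 1
        p = (hits + 1) / (permutations + 1)  # permutation p-value
        return r_obs, p
    ```

    A real analysis of 33 populations would use the same logic over the full 751-microsatellite and 108-feature distance matrices, usually via a dedicated statistics package rather than hand-rolled code.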
  • Indefrey, P., & Gullberg, M. (Eds.). (2008). Time to speak: Cognitive and neural prerequisites for time in language [Special Issue]. Language Learning, 58(suppl. 1).

    Abstract

    Time is a fundamental aspect of human cognition and action. All languages have developed rich means to express various facets of time, such as bare time spans, their position on the time line, or their duration. The articles in this volume give an overview of what we know about the neural and cognitive representations of time that speakers can draw on in language. Starting with an overview of the main devices used to encode time in natural language, such as lexical elements, tense and aspect, the research presented in this volume addresses the relationship between temporal language, culture, and thought, the relationship between verb aspect and mental simulations of events, the development of temporal concepts, time perception, the storage and retrieval of temporal information in autobiographical memory, and neural correlates of tense processing and sequence planning. The psychological and neurobiological findings presented here will provide important insights to inform and extend current studies of time in language and in language acquisition.
  • Isaac, A., Schlobach, S., Matthezing, H., & Zinn, C. (2008). Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies. Library Review, 57(3), 187-199.
  • Janse, E. (2008). Spoken-word processing in aphasia: Effects of item overlap and item repetition. Brain and Language, 105, 185-198. doi:10.1016/j.bandl.2007.10.002.

    Abstract

    Two studies were carried out to investigate the effects of presentation of primes showing partial (word-initial) or full overlap on processing of spoken target words. The first study investigated whether time compression would interfere with lexical processing so as to elicit aphasic-like performance in non-brain-damaged subjects. The second study was designed to compare effects of item overlap and item repetition in aphasic patients of different diagnostic types. Time compression did not interfere with lexical deactivation for the non-brain-damaged subjects. Furthermore, all aphasic patients showed immediate inhibition of co-activated candidates. These combined results show that deactivation is a fast process. Repetition effects, however, seem to arise only at the longer term in aphasic patients. Importantly, poor performance on diagnostic verbal STM tasks was shown to be related to lexical decision performance in both overlap and repetition conditions, which suggests a common underlying deficit.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
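    The non-uniform timing pattern reported above (unstressed syllables reduced more than stressed ones) can be mimicked when compressing speech artificially. As a minimal sketch only, assuming a simple two-rate scheme with made-up parameter names (`overall`, `unstressed_bias`) that are not taken from the paper:

    ```python
    def nonuniform_compress(durations, stressed, overall=0.5, unstressed_bias=0.8):
        """Scale per-syllable durations so the whole utterance shrinks to
        `overall` of its original duration, while unstressed syllables are
        compressed more strongly: their scale factor is `unstressed_bias`
        times the stressed factor, making the prosodic pattern more
        pronounced, as in natural fast speech."""
        S = sum(d for d, s in zip(durations, stressed) if s)        # stressed total
        U = sum(d for d, s in zip(durations, stressed) if not s)    # unstressed total
        # Solve f_s*S + (bias*f_s)*U = overall*(S + U) for the stressed factor.
        f_s = overall * (S + U) / (S + unstressed_bias * U)
        f_u = unstressed_bias * f_s
        return [d * (f_s if s else f_u) for d, s in zip(durations, stressed)]
    ```

    For example, compressing a stressed-unstressed alternation to half its duration keeps the stressed syllables relatively longer while the summed duration still halves, which is the property the extrapolation in the study relies on.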
  • Janzen, G., Jansen, C., & Van Turennout, M. (2008). Memory consolidation of landmarks in good navigators. Hippocampus, 18, 40-47.

    Abstract

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful navigation.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Johnson, E. K., & Seidl, A. (2008). Clause segmentation by 6-month-olds: A crosslinguistic perspective. Infancy, 13, 440-455. doi:10.1080/15250000802329321.

    Abstract

    Each clause and phrase boundary necessarily aligns with a word boundary. Thus, infants’ attention to the edges of clauses and phrases may help them learn some of the language-specific cues defining word boundaries. Attention to prosodically well-formed clauses and phrases may also help infants begin to extract information important for learning the grammatical structure of their language. Despite the potentially important role that the perception of large prosodic units may play in early language acquisition, there has been little work investigating the extraction of these units from fluent speech by infants learning languages other than English. We report 2 experiments investigating Dutch learners’ clause segmentation abilities. In these studies, Dutch-learning 6-month-olds readily extract clauses from speech. However, Dutch learners differ from English learners in that they seem to be more reliant on pauses to detect clause boundaries. Two closely related explanations for this finding are considered, both of which stem from the acoustic differences in clause boundary realizations in Dutch versus English.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kempen, G., & Vosse, T. (1989). Incremental syntactic tree formation in human sentence processing: A cognitive architecture based on activation decay and simulated annealing. Connection Science, 1(3), 273-290. doi:10.1080/09540098908915642.

    Abstract

    A new cognitive architecture is proposed for the syntactic aspects of human sentence processing. The architecture, called Unification Space, is biologically inspired but not based on neural nets. Instead it relies on biosynthesis as a basic metaphor. We use simulated annealing as an optimization technique which searches for the best configuration of isolated syntactic segments or subtrees in the final parse tree. The gradually decaying activation of individual syntactic nodes determines the ‘global excitation level’ of the system. This parameter serves the function of ‘computational temperature’ in simulated annealing. We have built a computer implementation of the architecture which simulates well-known sentence understanding phenomena. We report successful simulations of the psycholinguistic effects of clause embedding, minimal attachment, right association and lexical ambiguity. In addition, we simulated impaired sentence understanding as observable in agrammatic patients. Since the Unification Space allows for contextual (semantic and pragmatic) influences on the syntactic tree formation process, it belongs to the class of interactive sentence processing models.
  • Kempen, G. (1979). La mise en paroles, aspects psychologiques de l'expression orale. Études de Linguistique Appliquée, 33, 19-28.

    Abstract

    Remarks on the factors involved in the process of formulating utterances. [Original abstract in French.]
  • Kempen, G. (1979). Psychologie van de zinsbouw: Een Wundtiaanse inleiding. Nederlands Tijdschrift voor de Psychologie, 34, 533-551.

    Abstract

    The psychology of language as developed by Wilhelm Wundt in his fundamental work Die Sprache (1900) has a strongly mentalistic character. The dominating positions held by behaviorism in psychology and structuralism in linguistics have overruled Wundt’s language theory to the effect that it has remained relatively unknown. This situation has changed recently under the influence of transformational linguistics and cognitive psychology. The paper discusses how Wundt applied the basic psychological concepts of apperception and association to language behavior, in particular to the construction and production of sentences during unprepared speech. The final part of the paper is devoted to the work, published in 1917, of the Dutch linguistic scholar Jacques van Ginneken, who elaborated Wundt’s ideas towards an explanation of some syntactic phenomena during the language acquisition of children.
  • Kempen, G. (1979). Woordwaarde. De Psycholoog, 14, 577.
  • Kerkhofs, R., Vonk, W., Schriefers, H., & Chwilla, D. J. (2008). Sentence processing in the visual and auditory modality: Do comma and prosodic break have parallel functions? Brain Research, 1224, 102-118. doi:10.1016/j.brainres.2008.05.034.

    Abstract

    Two Event-Related Potential (ERP) studies contrast the processing of locally ambiguous sentences in the visual and the auditory modality. These sentences are disambiguated by a lexical element. Before this element appears in a sentence, the sentence can also be disambiguated by a boundary marker: a comma in the visual modality, or a prosodic break in the auditory modality. Previous studies have shown that a specific ERP component, the Closure Positive Shift (CPS), can be elicited by these markers. The results of the present studies show that both the comma and the prosodic break disambiguate the ambiguous sentences before the critical lexical element, despite the fact that a clear CPS is only found in the auditory modality. Comma and prosodic break thus have parallel functions irrespective of whether they do or do not elicit a CPS.
  • Kho, K. H., Indefrey, P., Hagoort, P., Van Veelen, C. W. M., Van Rijen, P. C., & Ramsey, N. F. (2008). Unimpaired sentence comprehension after anterior temporal cortex resection. Neuropsychologia, 46(4), 1170-1178. doi:10.1016/j.neuropsychologia.2007.10.014.

    Abstract

    Functional imaging studies have demonstrated involvement of the anterior temporal cortex in sentence comprehension. It is unclear, however, whether the anterior temporal cortex is essential for this function. We studied two aspects of sentence comprehension, namely syntactic and prosodic comprehension, in temporal lobe epilepsy patients who were candidates for resection of the anterior temporal lobe. Methods: Temporal lobe epilepsy patients (n = 32) with normal (left) language dominance were tested on syntactic and prosodic comprehension before and after removal of the anterior temporal cortex. The prosodic comprehension test was also compared with performance of healthy control subjects (n = 47) before surgery. Results: Overall, temporal lobe epilepsy patients did not differ from healthy controls in syntactic and prosodic comprehension before surgery. They did perform less well on an affective prosody task. Post-operative testing revealed that syntactic and prosodic comprehension did not change after removal of the anterior temporal cortex. Discussion: The unchanged performance on syntactic and prosodic comprehension after removal of the anterior temporal cortex suggests that this area is not indispensable for sentence comprehension functions in temporal epilepsy patients. Potential implications for the postulated role of the anterior temporal lobe in the healthy brain are discussed.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kidd, E., & Cameron-Faulkner, T. (2008). The acquisition of the multiple senses of with. Linguistics, 46(1), 33-61. doi:10.1515/LING.2008.002.

    Abstract

    The present article reports on an investigation of one child's acquisition of the multiple senses of the preposition with from 2;0–4;0. Two competing claims regarding children's early representation and subsequent acquisition of with were investigated. The “multiple meanings” hypothesis predicts that children form individual form-meaning pairings for with as separate lexical entries. The “monosemy approach” (McKercher 2001) claims that children apply a unitary meaning by abstracting core features early in acquisition. The child's (“Brian”) speech and his input were coded according to eight distinguishable senses of with. The results showed that Brian first acquired the senses that were most frequent in the input (accompaniment, attribute, and instrument). Less common senses took much longer to emerge. A detailed analysis of the input showed that a variety of clues are available that potentially enable the child to distinguish among high frequency senses. The acquisition data suggested that the child initially applied a restricted one-to-one form-meaning mapping for with, which is argued to reflect the spatial properties of the preposition. On the basis of these results it is argued that neither the monosemy nor the multiple meanings approach can fully explain the data, but that the results are best explained by a combination of word learning principles and children's ability to categorize the contextual properties of each sense's use in the ambient language.
  • Kidd, E., & Lum, J. A. (2008). Sex differences in past tense overregularization. Developmental Science, 11(6), 882-889. doi:10.1111/j.1467-7687.2008.00744.x.

    Abstract

    Hartshorne and Ullman (2006) presented naturalistic language data from 25 children (15 boys, 10 girls) and showed that girls produced more past tense overregularization errors than did boys. In particular, girls were more likely to overregularize irregular verbs whose stems share phonological similarities with regular verbs. It was argued that the result supported the Declarative/Procedural model of language, a neuropsychological analogue of the dual-route approach to language. In the current study we present experimental data that are inconsistent with these naturalistic data. Eighty children (40 males, 40 females) aged 5;0–6;9 completed a past tense elicitation task, a test of declarative memory, and a test of non-verbal intelligence. The results revealed no sex differences on any of the measures. Instead, the best predictors of overregularization rates were item-level features of the test verbs. We discuss the results within the context of the dual versus single route debate on past tense acquisition.
  • Kim, J., Davis, C., & Cutler, A. (2008). Perceptual tests of rhythmic similarity: II. Syllable rhythm. Language and Speech, 51(4), 343-359. doi:10.1177/0023830908099069.

    Abstract

    To segment continuous speech into its component words, listeners make use of language rhythm; because rhythm differs across languages, so do the segmentation procedures which listeners use. For each of stress-, syllable- and mora-based rhythmic structure, perceptual experiments have led to the discovery of corresponding segmentation procedures. In the case of mora-based rhythm, similar segmentation has been demonstrated in the otherwise unrelated languages Japanese and Telugu; segmentation based on syllable rhythm, however, has been previously demonstrated only for European languages from the Romance family. We here report two target detection experiments in which Korean listeners, presented with speech in Korean and in French, displayed patterns of segmentation like those previously observed in analogous experiments with French listeners. The Korean listeners' accuracy in detecting word-initial target fragments in either language was significantly higher when the fragments corresponded exactly to a syllable in the input than when the fragments were smaller or larger than a syllable. We conclude that Korean and French listeners can call on similar procedures for segmenting speech, and we further propose that perceptual tests of speech segmentation provide a valuable accompaniment to acoustic analyses for establishing languages' rhythmic class membership.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Klein, W. (2008). Time in language, language in time. Language Learning, 58(suppl. 1), 1-12. doi:10.1111/j.1467-9922.2008.00457.x.
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, W. (2008). De gustibus est disputandum! Zeitschrift für Literaturwissenschaft und Linguistik, 152, 7-24.

    Abstract

    There are two core phenomena which any empirical investigation of beauty must account for: the existence of aesthetic experience, and the enormous variability of this experience across times, cultures, and people. Hence, it would seem a hopeless enterprise to determine ‘the very nature’ of beauty, and in fact, none of the many attempts from antiquity to the present day has found general acceptance. But what we should be able to investigate and understand is how properties of people, for example their varying cultural experiences, are correlated with the properties of the objects which we evaluate. Beauty is neither only in the eye of the observer nor only in the objects observed - it is in the way in which specific observers see specific objects.
  • Klein, W. (2008). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, (152), 5-6.
  • Klein, W. (1979). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, 9(33), 7-8.
  • Klein, W. (2008). Die Werke der Sprache: Für ein neues Verhältnis zwischen Literaturwissenschaft und Linguistik. Zeitschrift für Literaturwissenschaft und Linguistik, 150, 8-32.

    Abstract

    All disciplines depend on language; but two of them also have language as an object – literary studies and linguistics. Their objectives are not the same – but they are sufficiently similar to invite close cooperation. This is not what we find; in fact, the development of research over the last decades has led to a relationship which is, in the typical case, characterised by friendly, and sometimes less friendly, ignorance and indifference. This article discusses some of the reasons for this development, and it suggests some conditions under which both sides would benefit from more cooperation.
  • Klein, W., & Franceschini, R. (Eds.). (2003). Einfache Sprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 131.
  • Klein, W., & Schnell, R. (2008). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 150, 5-7.
  • Klein, W. (Ed.). (1989). Kindersprache [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (73).
  • Klein, W., & Schnell, R. (Eds.). (2008). Literaturwissenschaft und Linguistik [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (150).
  • Klein, W. (1989). Introspection into what? Review of C. Faerch & G. Kaspar (Eds.) Introspection in second language research 1987. Contemporary Psychology, 34(12), 1119-1120.
  • Klein, W. (Ed.). (2008). Ist Schönheit messbar? [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 152.
  • Klein, W. (Ed.). (1979). Sprache und Kontext [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (33).
  • Klein, W. (1989). Sprechen lernen - das Selbstverständlichste von der Welt: Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 73, 7-17.
  • Klein, W. (1989). Schreiben oder Lesen, aber nicht beides, oder: Vorschlag zur Wiedereinführung der Keilschrift mittels Hammer und Meißel. Zeitschrift für Literaturwissenschaft und Linguistik, 74, 116-119.
  • Klein, W. (1979). Wegauskünfte. Zeitschrift für Literaturwissenschaft und Linguistik, 33, 9-57.
  • Kuperman, V., Ernestus, M., & Baayen, R. H. (2008). Frequency distributions of uniphones, diphones, and triphones in spontaneous speech. Journal of the Acoustical Society of America, 124(6), 3897-3908. doi:10.1121/1.3006378.

    Abstract

    This paper explores the relationship between the acoustic duration of phonemic sequences and their frequencies of occurrence. The data were obtained from large (sub)corpora of spontaneous speech in Dutch, English, German, and Italian. Acoustic duration of an n-phone is shown to codetermine the n-phone's frequency of use, such that languages preferentially use diphones and triphones that are neither very long nor very short. The observed distributions are well approximated by a theoretical function that quantifies the concurrent action of the self-regulatory processes of minimization of articulatory effort and minimization of perception effort.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Languages and genes: reflections on biolinguistics and the nature-nurture question. Biolinguistics, 2(1), 114-126. Retrieved from http://www.biolinguistics.eu/index.php/biolinguistics/issue/view/7/showToc.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Reply to Bowles (2008). Biolinguistics, 2(2), 256-259.
  • Lai, C. S. L., Gerrelli, D., Monaco, A. P., Fisher, S. E., & Copp, A. J. (2003). FOXP2 expression during brain development coincides with adult sites of pathology in a severe speech and language disorder. Brain, 126(11), 2455-2462. doi:10.1093/brain/awg247.

    Abstract

    Disruption of FOXP2, a gene encoding a forkhead-domain transcription factor, causes a severe developmental disorder of verbal communication, involving profound articulation deficits, accompanied by linguistic and grammatical impairments. Investigation of the neural basis of this disorder has been limited previously to neuroimaging of affected children and adults. The discovery of the gene responsible, FOXP2, offers a unique opportunity to explore the relevant neural mechanisms from a molecular perspective. In the present study, we have determined the detailed spatial and temporal expression pattern of FOXP2 mRNA in the developing brain of mouse and human. We find expression in several structures including the cortical plate, basal ganglia, thalamus, inferior olives and cerebellum. These data support a role for FOXP2 in the development of corticostriatal and olivocerebellar circuits involved in motor control. We find intriguing concordance between regions of early expression and later sites of pathology suggested by neuroimaging. Moreover, the homologous pattern of FOXP2/Foxp2 expression in human and mouse argues for a role for this gene in development of motor-related circuits throughout mammalian species. Overall, this study provides support for the hypothesis that impairments in sequencing of movement and procedural learning might be central to the FOXP2-related speech and language disorder.
  • de Lange, F. P., Spronk, M., Willems, R. M., Toni, I., & Bekkering, H. (2008). Complementary systems for understanding action intentions. Current Biology, 18, 454-457. doi:10.1016/j.cub.2008.02.057.

    Abstract

    How humans understand the intention of others’ actions remains controversial. Some authors have suggested that intentions are recognized by means of a motor simulation of the observed action with the mirror-neuron system [1–3]. Others emphasize that intention recognition is an inferential process, often called ‘‘mentalizing’’ or employing a ‘‘theory of mind,’’ which activates areas well outside the motor system [4–6]. Here, we assessed the contribution of brain regions involved in motor simulation and mentalizing for understanding action intentions via functional brain imaging. Results show that the inferior frontal gyrus (part of the mirror-neuron system) processes the intentionality of an observed action on the basis of the visual properties of the action, irrespective of whether the subject paid attention to the intention or not. Conversely, brain areas that are part of a ‘‘mentalizing’’ network become active when subjects reflect about the intentionality of an observed action, but they are largely insensitive to the visual properties of the observed action. This supports the hypothesis that motor simulation and mentalizing have distinct but complementary functions for the recognition of others’ intentions.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2008). Increase in prefrontal cortical volume following cognitive behavioural therapy in patients with chronic fatigue syndrome. Brain, 131, 2172-2180. doi:10.1093/brain/awn140.

    Abstract

    Chronic fatigue syndrome (CFS) is a disabling disorder, characterized by persistent or relapsing fatigue. Recent studies have detected a decrease in cortical grey matter volume in patients with CFS, but it is unclear whether this cerebral atrophy constitutes a cause or a consequence of the disease. Cognitive behavioural therapy (CBT) is an effective behavioural intervention for CFS, which combines a rehabilitative approach of a graded increase in physical activity with a psychological approach that addresses thoughts and beliefs about CFS which may impair recovery. Here, we test the hypothesis that cerebral atrophy may be a reversible state that can ameliorate with successful CBT. We have quantified cerebral structural changes in 22 CFS patients that underwent CBT and 22 healthy control participants. At baseline, CFS patients had significantly lower grey matter volume than healthy control participants. CBT intervention led to a significant improvement in health status, physical activity and cognitive performance. Crucially, CFS patients showed a significant increase in grey matter volume, localized in the lateral prefrontal cortex. This change in cerebral volume was related to improvements in cognitive speed in the CFS patients. Our findings indicate that the cerebral atrophy associated with CFS is partially reversed after effective CBT. This result provides an example of macroscopic cortical plasticity in the adult human brain, demonstrating a surprisingly dynamic relation between behavioural state and cerebral anatomy. Furthermore, our results reveal a possible neurobiological substrate of psychotherapeutic treatment.
  • Lausberg, H., Cruz, R. F., Kita, S., Zaidel, E., & Ptito, A. (2003). Pantomime to visual presentation of objects: Left hand dyspraxia in patients with complete callosotomy. Brain, 126(2), 343-360. doi:10.1093/brain/awg042.

    Abstract

    Investigations of left hand praxis in imitation and object use in patients with callosal disconnection have yielded divergent results, inducing a debate between two theoretical positions. Whereas Liepmann suggested that the left hemisphere is motor dominant, others maintain that both hemispheres have equal motor competences and propose that left hand apraxia in patients with callosal disconnection is secondary to left hemispheric specialization for language or other task modalities. The present study aims to gain further insight into the motor competence of the right hemisphere by investigating pantomime of object use in split-brain patients. Three patients with complete callosotomy and, as control groups, five patients with partial callosotomy and nine healthy subjects were examined for their ability to pantomime object use to visual object presentation and demonstrate object manipulation. In each condition, 11 objects were presented to the subjects who pantomimed or demonstrated the object use with either hand. In addition, six object pairs were presented to test bimanual coordination. Two independent raters evaluated the videotaped movement demonstrations. While object use demonstrations were perfect in all three groups, the split-brain patients displayed apraxic errors only with their left hands in the pantomime condition. The movement analysis of concept and execution errors included the examination of ipsilateral versus contralateral motor control. As the right hand/left hemisphere performances demonstrated retrieval of the correct movement concepts, concept errors by the left hand were taken as evidence for right hemisphere control. Several types of execution errors reflected a lack of distal motor control indicating the use of ipsilateral pathways. While one split-brain patient controlled his left hand predominantly by ipsilateral pathways in the pantomime condition, the error profile in the other two split-brain patients suggested that the right hemisphere controlled their left hands. In the object use condition, in all three split-brain patients fine-graded distal movements in the left hand indicated right hemispheric control. Our data show left hand apraxia in split-brain patients is not limited to verbal commands, but also occurs in pantomime to visual presentation of objects. As the demonstration with object in hand was unimpaired in either hand, both hemispheres must contain movement concepts for object use. However, the disconnected right hemisphere is impaired in retrieving the movement concept in response to visual object presentation, presumably because of a deficit in associating perceptual object representation with the movement concepts.
  • Lausberg, H., Kita, S., Zaidel, E., & Ptito, A. (2003). Split-brain patients neglect left personal space during right-handed gestures. Neuropsychologia, 41(10), 1317-1329. doi:10.1016/S0028-3932(03)00047-2.

    Abstract

    Since some patients with right hemisphere damage or with spontaneous callosal disconnection neglect the left half of space, it has been suggested that the left cerebral hemisphere predominantly attends to the right half of space. However, clinical investigations of patients having undergone surgical callosal section have not shown neglect when the hemispheres are tested separately. These observations question the validity of theoretical models that propose a left hemispheric specialisation for attending to the right half of space. The present study aims to investigate neglect and the use of space by either hand in gestural demonstrations in three split-brain patients as compared to five patients with partial callosotomy and 11 healthy subjects. Subjects were asked to demonstrate with precise gestures and without speaking the content of animated scenes with two moving objects. The results show that in the absence of primary perceptual or representational neglect, split-brain patients neglect left personal space in right-handed gestural demonstrations. Since this neglect of left personal space cannot be explained by directional or spatial akinesia, it is suggested that it originates at the conceptual level, where the spatial coordinates for right-hand gestures are planned. The present findings are at odds with the position that the separate left hemisphere possesses adequate mechanisms for acting in both halves of space and neglect results from right hemisphere suppression of this potential. Rather, the results provide support for theoretical models that consider the left hemisphere as specialised for processing the right half of space during the execution of descriptive gestures.
  • Lausberg, H., & Kita, S. (2003). The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking. Brain and Language, 86(1), 57-69. doi:10.1016/S0093-934X(02)00534-5.

    Abstract

    The present study investigates the hand choice in iconic gestures that accompany speech. In 10 right-handed subjects gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left hand was used as often as the right hand to display iconic gestures. The choice of the right or left hand was determined by semantic aspects of the message. The influence of hemispheric language lateralization on the hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.
  • Lawson, D., Jordan, F., & Magid, K. (2008). On sex and suicide bombing: An evaluation of Kanazawa’s ‘evolutionary psychological imagination’. Journal of Evolutionary Psychology, 6(1), 73-84. doi:10.1556/JEP.2008.1002.

    Abstract

    Kanazawa (2007) proposes the ‘evolutionary psychological imagination’ (p.7) as an authoritative framework for understanding complex social and public issues. As a case study of this approach, Kanazawa addresses acts of international terrorism, specifically suicide bombings committed by Muslim men. It is proposed that a comprehensive explanation of such acts can be gained from taking an evolutionary perspective armed with only three points of cultural knowledge: 1. Muslims are exceptionally polygynous, 2. Muslim men believe they will gain reproductive access to 72 virgins if they die as a martyr and 3. Muslim men have limited access to pornography, which might otherwise relieve the tension built up from intra-sexual competition. We agree with Kanazawa that evolutionary models of human behaviour can contribute to our understanding of even the most complex social issues. However, Kanazawa’s case study, of what he refers to as ‘World War III’, rests on a flawed theoretical argument, lacks empirical backing, and holds little in the way of explanatory power.
  • Levelt, W. J. M. (1989). Hochleistung in Millisekunden: Sprechen und Sprache verstehen. Universitas, 44(511), 56-68.
  • Levelt, W. J. M. (1979). On learnability: A reply to Lasnik and Chomsky. Unpublished manuscript.
  • Levinson, S. C. (1989). A review of Relevance [book review of Dan Sperber & Deirdre Wilson, Relevance: communication and cognition]. Journal of Linguistics, 25, 455-472.
  • Levinson, S. C. (1979). Activity types and language. Linguistics, 17, 365-399.
  • Levinson, S. C., & Brown, P. (2003). Emmanuel Kant chez les Tenejapans: L'Anthropologie comme philosophie empirique [Translated by Claude Vandeloise for 'Langues et Cognition']. Langues et Cognition, 239-278.

    Abstract

    This is a translation of Levinson and Brown (1994).
  • Levinson, S. C., & Meira, S. (2003). 'Natural concepts' in the spatial topological domain - adpositional meanings in crosslinguistic perspective: An exercise in semantic typology. Language, 79(3), 485-516.

    Abstract

    Most approaches to spatial language have assumed that the simplest spatial notions are (after Piaget) topological and universal (containment, contiguity, proximity, support, represented as semantic primitives such as IN, ON, UNDER, etc.). These concepts would be coded directly in language, above all in small closed classes such as adpositions—thus providing a striking example of semantic categories as language-specific projections of universal conceptual notions. This idea, if correct, should have as a consequence that the semantic categories instantiated in spatial adpositions should be essentially uniform crosslinguistically. This article attempts to verify this possibility by comparing the semantics of spatial adpositions in nine unrelated languages, with the help of a standard elicitation procedure, thus producing a preliminary semantic typology of spatial adpositional systems. The differences between the languages turn out to be so significant as to be incompatible with stronger versions of the UNIVERSAL CONCEPTUAL CATEGORIES hypothesis. Rather, the language-specific spatial adposition meanings seem to emerge as compact subsets of an underlying semantic space, with certain areas being statistical ATTRACTORS or FOCI. Moreover, a comparison of systems with different degrees of complexity suggests the possibility of positing implicational hierarchies for spatial adpositions. But such hierarchies need to be treated as successive divisions of semantic space, as in recent treatments of basic color terms. This type of analysis appears to be a promising approach for future work in semantic typology.
  • Levinson, S. C. (2008). Landscape, seascape and the ontology of places on Rossel Island, Papua New Guinea. Language Sciences, 30(2/3), 256-290. doi:10.1016/j.langsci.2006.12.032.

    Abstract

    This paper describes the descriptive landscape and seascape terminology of an isolate language, Yélî Dnye, spoken on a remote island off Papua New Guinea. The terminology reveals an ontology of landscape terms fundamentally mismatching that in European languages, and in current GIS applications. These landscape terms, and a rich set of seascape terms, provide the ontological basis for toponyms across subdomains. Considering what motivates landscape categorization, three factors are considered: perceptual salience, human affordance and use, and cultural ideas. The data show that cultural ideas and practices are the major categorizing force: they directly impact the ecology with environmental artifacts, construct religious ideas which play a major role in the use of the environment and its naming, and provide abstract cultural templates which organize large portions of vocabulary across subdomains.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2008). Twelve-month-olds communicate helpfully and appropriately for knowledgeable and ignorant partners. Cognition, 108(3), 732-739. doi:10.1016/j.cognition.2008.06.013.

    Abstract

    In the current study we investigated whether 12-month-old infants gesture appropriately for knowledgeable versus ignorant partners, in order to provide them with needed information. In two experiments we found that in response to a searching adult, 12-month-olds pointed more often to an object whose location the adult did not know and thus needed information to find (she had not seen it fall down just previously) than to an object whose location she knew and thus did not need information to find (she had watched it fall down just previously). These results demonstrate that, in contrast to classic views of infant communication, infants’ early pointing at 12 months is already premised on an understanding of others’ knowledge and ignorance, along with a prosocial motive to help others by providing needed information.
  • Liszkowski, U. (2008). Before L1: A differentiated perspective on infant gestures. Gesture, 8(2), 180-196. doi:10.1075/gest.8.2.04lis.

    Abstract

    This paper investigates the social-cognitive and motivational complexities underlying prelinguistic infants' gestural communication. With regard to deictic referential gestures, new and recent experimental evidence shows that infant pointing is a complex communicative act based on social-cognitive skills and cooperative motives. With regard to infant representational gestures, findings suggest the need to re-interpret these gestures as initially non-symbolic gestural social acts. Based on the available empirical evidence, the paper argues that deictic referential communication emerges as a foundation of human communication first in gestures, already before language. Representational symbolic communication, instead, emerges as a transformation of deictic communication first in the vocal modality and, perhaps, in gestures through non-symbolic, socially situated routines.
  • Liszkowski, U., Albrecht, K., Carpenter, M., & Tomasello, M. (2008). Infants’ visual and auditory communication when a partner is or is not visually attending. Infant Behavior and Development, 31(2), 157-167. doi:10.1016/j.infbeh.2007.10.011.
  • Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. Neuroimage, 20, 1934-1943. doi:10.1016/j.neuroimage.2003.07.017.

    Abstract

    The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. In order to clarify their functional significance, we performed this functional magnetic resonance imaging study, which employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.
  • Mace, R., Jordan, F., & Holden, C. (2003). Testing evolutionary hypotheses about human biological adaptation using cross-cultural comparison. Comparative Biochemistry and Physiology A-Molecular & Integrative Physiology, 136(1), 85-94. doi:10.1016/S1095-6433(03)00019-9.

    Abstract

    Physiological data from a range of human populations living in different environments can provide valuable information for testing evolutionary hypotheses about human adaptation. By taking into account the effects of population history, phylogenetic comparative methods can help us determine whether variation results from selection due to particular environmental variables. These selective forces could even be due to cultural traits, which means that gene-culture co-evolution may be occurring. In this paper, we outline two examples of the use of these approaches to test adaptive hypotheses that explain global variation in two physiological traits: the first is lactose digestion capacity in adults, and the second is population sex-ratio at birth. We show that lower than average sex ratio at birth is associated with high fertility, and argue that global variation in sex ratio at birth has evolved as a response to the high physiological costs of producing boys in high fertility populations.
  • Magnuson, J. S., Tanenhaus, M. K., Aslin, R. N., & Dahan, D. (2003). The time course of spoken word learning and recognition: Studies with artificial lexicons. Journal of Experimental Psychology: General, 132(2), 202-227. doi:10.1037/0096-3445.132.2.202.

    Abstract

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
  • Magyari, L. (2003). Mit ne gondoljunk az állatokról? [What not to think about animals?] [Review of the book Wild Minds: What animals really think by M. Hauser]. Magyar Pszichológiai Szemle (Hungarian Psychological Review), 58(3), 417-424. doi:10.1556/MPSzle.58.2003.3.5.
  • Majid, A., Boster, J. S., & Bowerman, M. (2008). The cross-linguistic categorization of everyday events: A study of cutting and breaking. Cognition, 109(2), 235-250. doi:10.1016/j.cognition.2008.08.009.

    Abstract

    The cross-linguistic investigation of semantic categories has a long history, spanning many disciplines and covering many domains. But the extent to which semantic categories are universal or language-specific remains highly controversial. Focusing on the domain of events involving material destruction (“cutting and breaking” events, for short), this study investigates how speakers of different languages implicitly categorize such events through the verbs they use to talk about them. Speakers of 28 typologically, genetically and geographically diverse languages were asked to describe the events shown in a set of videoclips, and the distribution of their verbs across the events was analyzed with multivariate statistics. The results show that there is considerable agreement across languages in the dimensions along which cutting and breaking events are distinguished, although there is variation in the number of categories and the placement of their boundaries. This suggests that there are strong constraints in human event categorization, and that variation is played out within a restricted semantic space.
  • Majid, A. (2003). Towards behavioural genomics. The Psychologist, 16(6), 298-298.
  • Majid, A. (2008). Conceptual maps using multivariate statistics: Building bridges between typological linguistics and psychology [Commentary on Inferring universals from grammatical variation: Multidimensional scaling for typological analysis by William Croft and Keith T. Poole]. Theoretical Linguistics, 34(1), 59-66. doi:10.1515/THLI.2008.005.
  • Majid, A., & Huettig, F. (2008). A crosslinguistic perspective on semantic cognition [commentary on Precis of Semantic cognition: A parallel distributed approach by Timothy T. Rogers and James L. McClelland]. Behavioral and Brain Sciences, 31(6), 720-721. doi:10.1017/S0140525X08005967.

    Abstract

    Coherent covariation appears to be a powerful explanatory factor accounting for a range of phenomena in semantic cognition. But its role in accounting for the crosslinguistic facts is less clear. Variation in naming, within the same semantic domain, raises vexing questions about the necessary parameters needed to account for the basic facts underlying categorization.
  • Majid, A. (2003). Into the deep. The Psychologist, 16(6), 300-300.
  • Majid, A., & Levinson, S. C. (2008). Language does provide support for basic tastes [Commentary on A study of the science of taste: On the origins and influence of the core ideas by Robert P. Erickson]. Behavioral and Brain Sciences, 31, 86-87. doi:10.1017/S0140525X08003476.

    Abstract

    Recurrent lexicalization patterns across widely different cultural contexts can provide a window onto common conceptualizations. The cross-linguistic data support the idea that sweet, salt, sour, and bitter are basic tastes. In addition, umami and fatty are likely basic tastes, as well.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2008). Discourse structure and relative clause processing. Memory & Cognition, 36(1), 170-181. doi:10.3758/MC.36.1.170.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Tsuda, N., & Majid, A. (2008). Talking about walking: Biomechanics and the language of locomotion. Psychological Science, 19(3), 232-240. doi:10.1111/j.1467-9280.2008.02074.x.

    Abstract

    What drives humans around the world to converge in certain ways in their naming while diverging dramatically in others? We studied how naming patterns are constrained by investigating whether labeling of human locomotion reflects the biomechanical discontinuity between walking and running gaits. Similarity judgments of a student locomoting on a treadmill at different slopes and speeds revealed perception of this discontinuity. Naming judgments of the same clips by speakers of English, Japanese, Spanish, and Dutch showed lexical distinctions between walking and running consistent with the perceived discontinuity. Typicality judgments showed that major gait terms of the four languages share goodness-of-example gradients. These data demonstrate that naming reflects the biomechanical discontinuity between walking and running and that shared elements of naming can arise from correlations among stimulus properties that are dynamic and fleeting. The results support the proposal that converging naming patterns reflect structure in the world, not only acts of construction by observers.
  • Mangione-Smith, R., Stivers, T., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Online commentary during the physical examination: A communication tool for avoiding inappropriate antibiotic prescribing? Social Science and Medicine, 56(2), 313-320.
  • Marcus, G. F., & Fisher, S. E. (2003). FOXP2 in focus: What can genes tell us about speech and language? Trends in Cognitive Sciences, 7, 257-262. doi:10.1016/S1364-6613(03)00104-9.

    Abstract

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. In 2001, a study described the first case of a gene, FOXP2, which is thought to be implicated in our ability to acquire spoken language. In the present article, we discuss how this gene was discovered, what it might do, how it relates to other genes, and what it could tell us about the nature of speech and language development. We explain how FOXP2 could, without being specific to the brain or to our own species, still provide an invaluable entry-point into understanding the genetic cascades and neural pathways that contribute to our capacity for speech and language.
  • Marlow, A. J., Fisher, S. E., Francks, C., MacPhie, I. L., Cherny, S. S., Richardson, A. J., Talcott, J. B., Stein, J. F., Monaco, A. P., & Cardon, L. R. (2003). Use of multivariate linkage analysis for dissection of a complex cognitive trait. American Journal of Human Genetics, 72(3), 561-570. doi:10.1086/368201.

    Abstract

    Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits.
  • Martin, A. E., & McElree, B. (2008). A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis. Journal of Memory and Language, 58(3), 879-906. doi:10.1016/j.jml.2007.06.010.

    Abstract

    Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context. We investigated the online interpretation of VP ellipsis in an eye-tracking experiment and four speed–accuracy tradeoff experiments. To investigate whether the antecedent for a VP ellipsis is accessed with a search or direct-access retrieval process, Experiments 1 and 2 measured the effect of the distance between an ellipsis and its antecedent on the speed and accuracy of comprehension. Accuracy was lower with longer distances, indicating that interpolated material reduced the quality of retrieved information about the antecedent. However, contra a search process, distance did not affect the speed of interpreting ellipsis. This pattern suggests that antecedent representations are content-addressable and retrieved with a direct-access process. To determine whether interpreting ellipsis involves copying antecedent information into the ellipsis site, Experiments 3–5 manipulated the length and complexity of the antecedent. Some types of antecedent complexity lowered accuracy, notably, the number of discourse entities in the antecedent. However, neither antecedent length nor complexity affected the speed of interpreting the ellipsis. This pattern is inconsistent with a copy operation, and it suggests that ellipsis interpretation may involve a pointer to extant structures in memory.
  • McCafferty, S. G., & Gullberg, M. (Eds.). (2008). Gesture and SLA: Toward an integrated approach [Special Issue]. Studies in Second Language Acquisition, 30(2).
  • McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.

    Abstract

    Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Planning levels in naming and reading complex numerals. Memory & Cognition, 31(8), 1238-1249.

    Abstract

    On the basis of evidence from studies of the naming and reading of numerals, Ferrand (1999) argued that the naming of objects is slower than reading their names, due to a greater response uncertainty in naming than in reading, rather than to an obligatory conceptual preparation for naming, but not for reading. We manipulated the need for conceptual preparation, while keeping response uncertainty constant in the naming and reading of complex numerals. In Experiment 1, participants named three-digit Arabic numerals either as house numbers or clock times. House number naming latencies were determined mostly by morphophonological factors, such as morpheme frequency and the number of phonemes, whereas clock time naming latencies revealed an additional conceptual involvement. In Experiment 2, the numerals were presented in alphabetic format and had to be read aloud. Reading latencies were determined mostly by morphophonological factors in both modes. These results suggest that conceptual preparation, rather than response uncertainty, is responsible for the difference between naming and reading latencies.
  • Meyer, A. S., Roelofs, A., & Levelt, W. J. M. (2003). Word length effects in object naming: The role of a response criterion. Journal of Memory and Language, 48(1), 131-147. doi:10.1016/S0749-596X(02)00509-0.

    Abstract

    According to Levelt, Roelofs, and Meyer (1999) speakers generate the phonological and phonetic representations of successive syllables of a word in sequence and only begin to speak after having fully planned at least one complete phonological word. Therefore, speech onset latencies should be longer for long than for short words. We tested this prediction in four experiments in which Dutch participants named or categorized objects with monosyllabic or disyllabic names. Experiment 1 yielded a length effect on production latencies when objects with long and short names were tested in separate blocks, but not when they were mixed. Experiment 2 showed that the length effect was not due to a difference in the ease of object recognition. Experiment 3 replicated the results of Experiment 1 using a within-participants design. In Experiment 4, the long and short target words appeared in a phrasal context. In addition to the speech onset latencies, we obtained the viewing times for the target objects, which have been shown to depend on the time necessary to plan the form of the target names. We found word length effects for both dependent variables, but only when objects with short and long names were presented in separate blocks. We argue that in pure and mixed blocks speakers used different response deadlines, which they tried to meet by either generating the motor programs for one syllable or for all syllables of the word before speech onset. Computer simulations using WEAVER++ support this view.
  • Meyer, A. S., Ouellet, M., & Häcker, C. (2008). Parallel processing of objects in a naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 982-987. doi:10.1037/0278-7393.34.4.982.

    Abstract

    The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.
  • Mitterer, H., & De Ruiter, J. P. (2008). Recalibrating color categories using world knowledge. Psychological Science, 19(7), 629-634. doi:10.1111/j.1467-9280.2008.02133.x.

    Abstract

    When the perceptual system uses color to facilitate object recognition, it must solve the color-constancy problem: The light an object reflects to an observer's eyes confounds properties of the source of the illumination with the surface reflectance of the object. Information from the visual scene (bottom-up information) is insufficient to solve this problem. We show that observers use world knowledge about objects and their prototypical colors as a source of top-down information to improve color constancy. Specifically, observers use world knowledge to recalibrate their color categories. Our results also suggest that similar effects previously observed in language perception are the consequence of a general perceptual process.