Publications

  • Klein, W. (Ed.). (2008). Ist Schönheit messbar? [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 152.
  • Klein, W. (1991). Raumausdrücke. Linguistische Berichte, 132, 77-114.
  • Klein, W., & Von Stutterheim, C. (1991). Text structure and referential movement. Arbeitsberichte des Forschungsprogramms S&P: Sprache und Pragmatik, 22.
  • Klein, W. (1980). Some remarks on Sanders' typology of elliptical coordinations. Linguistics, 18, 871-876.

    Abstract

    Starting with Ross (1970), various proposals have been made to classify elliptical coordinations and to characterize different languages according to the types of ellipses which they admit. Sanders (1977) discusses four of these proposals, shows that they are inadequate on various grounds and proposes a fifth typology whose central claim is 'evidently correct', as he states (p. 258). In the following, I shall briefly outline this typology and then show that it is inadequate, too. Since there is only one language I know — German — I will take all my examples from this language. Moreover, all examples will be straightforward and easy to judge.
  • Klein, W. (Ed.). (1979). Sprache und Kontext [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (33).
  • Klein, W. (1980). Vorwort. Zeitschrift für Literaturwissenschaft und Linguistik, 10, 7-8.
  • Klein, W. (1991). Was kann sich die Übersetzungswissenschaft von der Linguistik erwarten? Zeitschrift für Literaturwissenschaft und Linguistik, 84, 104-123.
  • Klein, W. (1979). Wegauskünfte. Zeitschrift für Literaturwissenschaft und Linguistik, 33, 9-57.
  • Klein, W. (1999). Wie sich das deutsche Perfekt zusammensetzt. Zeitschrift für Literaturwissenschaft und Linguistik, (113), 52-85.
  • Knösche, T. R., & Bastiaansen, M. C. M. (2002). On the time resolution of event-related desynchronization: A simulation study. Clinical Neurophysiology, 113(5), 754-763. doi:10.1016/S1388-2457(02)00055-X.

    Abstract

    Objectives: To investigate the time resolution of different methods for the computation of event-related desynchronization/synchronization (ERD/ERS), including one based on Hilbert transform. Methods: In order to better understand the time resolution of ERD/ERS, which is a function of factors such as the exact computation method, the frequency under study, the number of trials, and the sampling frequency, we simulated sudden changes in oscillation amplitude as well as very short and closely spaced events. Results: Hilbert-based ERD yields very similar results to ERD integrated over predefined time intervals (block ERD), if the block length is half the period length of the studied frequency. ERD predicts the onset of a change in oscillation amplitude with an error margin of only 10–30 ms. On the other hand, the time the ERD response needs to climb to its full height after a sudden change in oscillation amplitude is quite long, i.e. between 200 and 500 ms. With respect to sensitivity to short oscillatory events, the ratio between sampling frequency and electroencephalographic frequency band plays a major role. Conclusions: (1) The optimal time interval for the computation of block ERD is half a period of the frequency under investigation. (2) Due to the slow impulse response, amplitude effects in the ERD may in reality be caused by duration differences. (3) Although ERD based on the Hilbert transform does not yield any significant advantages over classical ERD in terms of time resolution, it has some important practical advantages.
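
    A minimal sketch of the two ways of computing ERD/ERS compared in this abstract: the classic block method, which averages band power over predefined intervals, and the Hilbert-based method, which uses the instantaneous amplitude envelope. The filter settings, function names, and baseline handling below are illustrative assumptions, not taken from the paper; ERD/ERS is expressed, as is conventional, as the percent change of band power relative to a reference interval.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def bandpass(trials, fs, band, order=4):
          # Zero-phase band-pass filter; trials has shape (n_trials, n_samples).
          b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          return filtfilt(b, a, trials, axis=-1)

      def erd_percent(power, baseline):
          # ERD/ERS as percent change of band power relative to a reference interval (slice or mask).
          ref = power[baseline].mean()
          return (power - ref) / ref * 100.0

      def block_erd(trials, fs, band, block_len):
          # Classic block ERD: square the filtered signal, average over trials,
          # then over consecutive blocks of block_len samples.
          power = (bandpass(trials, fs, band) ** 2).mean(axis=0)
          n_blocks = power.size // block_len
          return power[: n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=-1)

      def hilbert_erd(trials, fs, band):
          # Hilbert-based ERD: squared instantaneous amplitude envelope, averaged over trials.
          envelope = np.abs(hilbert(bandpass(trials, fs, band), axis=-1))
          return (envelope ** 2).mean(axis=0)

    The paper's recommendation would correspond to choosing block_len as half a period of the frequency under study, e.g. block_len = int(fs / (2 * f0)).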
  • Knösche, T. R., & Bastiaansen, M. C. M. (2001). Does the Hilbert transform improve accuracy and time resolution of ERD/ERS? Biomedizinische Technik, 46(2), 106-108.
  • Kuijpers, C., Van Donselaar, W., & Cutler, A. (2002). Perceptual effects of assimilation-induced violation of final devoicing in Dutch. In J. H. L. Hansen, & B. Pellom (Eds.), The 7th International Conference on Spoken Language Processing (pp. 1661-1664). Denver: ISCA.

    Abstract

    Voice assimilation in Dutch is an optional phonological rule which changes the surface forms of words and in doing so may violate the otherwise obligatory phonological rule of syllable-final devoicing. We report two experiments examining the influence of voice assimilation on phoneme processing, in lexical compound words and in noun-verb phrases. Processing was not impaired in appropriate assimilation contexts across morpheme boundaries, but was impaired when devoicing was violated (a) in an inappropriate (non-assimilatory) context, or (b) across a syntactic boundary.
  • Küntay, A. C., & Slobin, D. I. (2002). Putting interaction back into child language: Examples from Turkish. Psychology of Language and Communication, 6(1), 14.

    Abstract

    As in the case of other non-English languages, the study of the acquisition of Turkish has mostly focused on aspects of grammatical morphology and syntax, largely neglecting the study of the effect of interactional factors on child morphosyntax. This paper reviews indications from past research that studying input and adult-child discourse can facilitate the study of the acquisition of morphosyntax in the Turkish language. It also provides some recent studies of Turkish child language on the relationship of child-directed speech to the early acquisition of morphosyntax, and on the pragmatic features of a certain kind of discourse form in child-directed speech called variation sets.
  • Küntay, A., & Özyürek, A. (2002). Joint attention and the development of the use of demonstrative pronouns in Turkish. In B. Skarabela, S. Fish, & A. H. Do (Eds.), Proceedings of the 26th annual Boston University Conference on Language Development (pp. 336-347). Somerville, MA: Cascadilla Press.
  • Kuperman, V., Ernestus, M., & Baayen, R. H. (2008). Frequency distributions of uniphones, diphones, and triphones in spontaneous speech. Journal of the Acoustical Society of America, 124(6), 3897-3908. doi:10.1121/1.3006378.

    Abstract

    This paper explores the relationship between the acoustic duration of phonemic sequences and their frequencies of occurrence. The data were obtained from large (sub)corpora of spontaneous speech in Dutch, English, German, and Italian. Acoustic duration of an n-phone is shown to codetermine the n-phone's frequency of use, such that languages preferentially use diphones and triphones that are neither very long nor very short. The observed distributions are well approximated by a theoretical function that quantifies the concurrent action of the self-regulatory processes of minimization of articulatory effort and minimization of perception effort.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Languages and genes: reflections on biolinguistics and the nature-nurture question. Biolinguistics, 2(1), 114-126. Retrieved from http://www.biolinguistics.eu/index.php/biolinguistics/issue/view/7/showToc.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Reply to Bowles (2008). Biolinguistics, 2(2), 256-259.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Vargha-Khadem, F., & Monaco, A. P. (2001). A forkhead-domain gene is mutated in a severe speech and language disorder [Letters to Nature]. Nature, 413, 519-523. doi:10.1038/35097076.

    Abstract

    Individuals affected with developmental disorders of speech and language have substantial difficulty acquiring expressive and/or receptive language in the absence of any profound sensory or neurological impairment and despite adequate intelligence and opportunity. Although studies of twins consistently indicate that a significant genetic component is involved, most families segregating speech and language deficits show complex patterns of inheritance, and a gene that predisposes individuals to such disorders has not been identified. We have studied a unique three-generation pedigree, KE, in which a severe speech and language disorder is transmitted as an autosomal-dominant monogenic trait. Our previous work mapped the locus responsible, SPCH1, to a 5.6-cM interval of region 7q31 on chromosome 7 (ref. 5). We also identified an unrelated individual, CS, in whom speech and language impairment is associated with a chromosomal translocation involving the SPCH1 interval. Here we show that the gene FOXP2, which encodes a putative transcription factor containing a polyglutamine tract and a forkhead DNA-binding domain, is directly disrupted by the translocation breakpoint in CS. In addition, we identify a point mutation in affected members of the KE family that alters an invariant amino-acid residue in the forkhead domain. Our findings suggest that FOXP2 is involved in the developmental process that culminates in speech and language.
  • de Lange, F. P., Spronk, M., Willems, R. M., Toni, I., & Bekkering, H. (2008). Complementary systems for understanding action intentions. Current Biology, 18, 454-457. doi:10.1016/j.cub.2008.02.057.

    Abstract

    How humans understand the intention of others’ actions remains controversial. Some authors have suggested that intentions are recognized by means of a motor simulation of the observed action with the mirror-neuron system [1–3]. Others emphasize that intention recognition is an inferential process, often called “mentalizing” or employing a “theory of mind,” which activates areas well outside the motor system [4–6]. Here, we assessed the contribution of brain regions involved in motor simulation and mentalizing for understanding action intentions via functional brain imaging. Results show that the inferior frontal gyrus (part of the mirror-neuron system) processes the intentionality of an observed action on the basis of the visual properties of the action, irrespective of whether the subject paid attention to the intention or not. Conversely, brain areas that are part of a “mentalizing” network become active when subjects reflect about the intentionality of an observed action, but they are largely insensitive to the visual properties of the observed action. This supports the hypothesis that motor simulation and mentalizing have distinct but complementary functions for the recognition of others’ intentions.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2008). Increase in prefrontal cortical volume following cognitive behavioural therapy in patients with chronic fatigue syndrome. Brain, 131, 2172-2180. doi:10.1093/brain/awn140.

    Abstract

    Chronic fatigue syndrome (CFS) is a disabling disorder, characterized by persistent or relapsing fatigue. Recent studies have detected a decrease in cortical grey matter volume in patients with CFS, but it is unclear whether this cerebral atrophy constitutes a cause or a consequence of the disease. Cognitive behavioural therapy (CBT) is an effective behavioural intervention for CFS, which combines a rehabilitative approach of a graded increase in physical activity with a psychological approach that addresses thoughts and beliefs about CFS which may impair recovery. Here, we test the hypothesis that cerebral atrophy may be a reversible state that can ameliorate with successful CBT. We have quantified cerebral structural changes in 22 CFS patients that underwent CBT and 22 healthy control participants. At baseline, CFS patients had significantly lower grey matter volume than healthy control participants. CBT intervention led to a significant improvement in health status, physical activity and cognitive performance. Crucially, CFS patients showed a significant increase in grey matter volume, localized in the lateral prefrontal cortex. This change in cerebral volume was related to improvements in cognitive speed in the CFS patients. Our findings indicate that the cerebral atrophy associated with CFS is partially reversed after effective CBT. This result provides an example of macroscopic cortical plasticity in the adult human brain, demonstrating a surprisingly dynamic relation between behavioural state and cerebral anatomy. Furthermore, our results reveal a possible neurobiological substrate of psychotherapeutic treatment.
  • Lausberg, H., & Kita, S. (2002). Dissociation of right and left hand gesture spaces in split-brain patients. Cortex, 38(5), 883-886. doi:10.1016/S0010-9452(08)70062-5.

    Abstract

    The present study investigates hemispheric specialisation in the use of space in communicative gestures. For this purpose, we investigate split-brain patients in whom spontaneous and distinct right hand gestures can only be controlled by the left hemisphere and vice versa, the left hand only by the right hemisphere. On this anatomical basis, we can infer hemispheric specialisation from the performances of the right and left hands. In contrast to left hand dyspraxia in tasks that require language processing, split-brain patients utilise their left hands in a meaningful way in visuo-constructive tasks such as copying drawings or block-design. Therefore, we conjecture that split-brain patients are capable of using their left hands for the communication of the content of visuo-spatial animations via gestural demonstration. On this basis, we further examine the use of space in communicative gestures by the right and left hands. McNeill and Pedelty (1995) noted for the split-brain patient N.G. that her iconic right hand gestures were exclusively displayed in the right personal space. The present study investigates systematically if there is indication for neglect of the left personal space in right hand gestures in split-brain patients.
  • Lausberg, H., & Kita, S. (2001). Hemispheric specialization in nonverbal gesticulation investigated in patients with callosal disconnection. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 266-270). Paris, France: Éditions L'Harmattan.
  • Lawson, D., Jordan, F., & Magid, K. (2008). On sex and suicide bombing: An evaluation of Kanazawa’s ‘evolutionary psychological imagination’. Journal of Evolutionary Psychology, 6(1), 73-84. doi:10.1556/JEP.2008.1002.

    Abstract

    Kanazawa (2007) proposes the ‘evolutionary psychological imagination’ (p.7) as an authoritative framework for understanding complex social and public issues. As a case study of this approach, Kanazawa addresses acts of international terrorism, specifically suicide bombings committed by Muslim men. It is proposed that a comprehensive explanation of such acts can be gained from taking an evolutionary perspective armed with only three points of cultural knowledge: 1. Muslims are exceptionally polygynous, 2. Muslim men believe they will gain reproductive access to 72 virgins if they die as a martyr and 3. Muslim men have limited access to pornography, which might otherwise relieve the tension built up from intra-sexual competition. We agree with Kanazawa that evolutionary models of human behaviour can contribute to our understanding of even the most complex social issues. However, Kanazawa’s case study, of what he refers to as ‘World War III’, rests on a flawed theoretical argument, lacks empirical backing, and holds little in the way of explanatory power.
  • Ledberg, A., Fransson, P., Larsson, J., & Petersson, K. M. (2001). A 4D approach to the analysis of functional brain images: Application to fMRI data. Human Brain Mapping, 13, 185-198. doi:10.1002/hbm.1032.

    Abstract

    This paper presents a new approach to functional magnetic resonance imaging (FMRI) data analysis. The main difference lies in the view of what comprises an observation. Here we treat the data from one scanning session (comprising t volumes, say) as one observation. This is contrary to the conventional way of looking at the data where each session is treated as t different observations. Thus instead of viewing the v voxels comprising the 3D volume of the brain as the variables, we suggest the usage of the vt hypervoxels comprising the 4D volume of the brain-over-session as the variables. A linear model is fitted to the 4D volumes originating from different sessions. Parameter estimation and hypothesis testing in this model can be performed with standard techniques. The hypothesis testing generates 4D statistical images (SIs) to which any relevant test statistic can be applied. In this paper we describe two test statistics, one voxel based and one cluster based, that can be used to test a range of hypotheses. There are several benefits in treating the data from each session as one observation, two of which are: (i) the temporal characteristics of the signal can be investigated without an explicit model for the blood oxygenation level dependent (BOLD) contrast response function, and (ii) the observations (sessions) can be assumed to be independent and hence inference on the 4D SI can be made by nonparametric or Monte Carlo methods. The suggested 4D approach is applied to FMRI data and is shown to accurately detect the expected signal.
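
    A minimal numpy sketch of the core idea described above: each session's brain-over-time volume is treated as a single observation, and a linear model is fitted across sessions for every hypervoxel, yielding a 4D statistical image. The array shapes, the contrast handling, and all names are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def fit_4d_glm(sessions, design, contrast):
          # sessions: (n_sessions, v, t) array; the v*t hypervoxels of one session form one observation.
          # design:   (n_sessions, p) matrix of session-level regressors.
          # contrast: length-p vector selecting the effect of interest.
          n, v, t = sessions.shape
          Y = sessions.reshape(n, v * t)                     # observations x hypervoxels
          beta, *_ = np.linalg.lstsq(design, Y, rcond=None)  # (p, v*t) parameter estimates
          resid = Y - design @ beta
          dof = n - np.linalg.matrix_rank(design)
          sigma2 = (resid ** 2).sum(axis=0) / dof
          var_c = contrast @ np.linalg.pinv(design.T @ design) @ contrast
          tmap = (contrast @ beta) / np.sqrt(sigma2 * var_c)
          return tmap.reshape(v, t)                          # statistical image over voxels x time

    Voxel- or cluster-based test statistics can then be applied to this statistical image, and because sessions are independent observations, inference can proceed nonparametrically, for instance by permuting session labels.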
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2008). Accelerating 3D medical image segmentation with high performance computing. In Proceedings of the IEEE International Workshops on Image Processing Theory, Tools and Applications - IPT (pp. 1-8).

    Abstract

    Digital processing of medical images has helped physicians and patients during past years by allowing examination and diagnosis on a very precise level. Nowadays, possibly the greatest support it can offer for modern healthcare is the use of high-performance computing architectures to process the huge amounts of data that can be collected by modern acquisition devices. This paper presents a parallel processing implementation of an image segmentation algorithm that operates on a computer cluster equipped with 10 processing units. Thanks to a well-organized distribution of the workload, we manage to significantly shorten the execution time of the developed algorithm and reach a performance gain very close to linear.
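
    The abstract does not detail the segmentation algorithm or cluster setup, so the sketch below only illustrates the general workload-distribution idea: the volume is split into slabs that are processed in parallel by a pool of workers. The thresholding step is a placeholder and all names are assumptions; a real algorithm would also need to exchange information at slab borders.

      import numpy as np
      from multiprocessing import Pool

      def segment_slab(args):
          # Placeholder per-slab segmentation: a simple intensity threshold stands in
          # for the actual algorithm used in the paper.
          slab, threshold = args
          return (slab > threshold).astype(np.uint8)

      def parallel_segment(volume, threshold, n_workers=10):
          # Split the 3D volume into contiguous slabs along the z-axis and label them in parallel.
          slabs = np.array_split(volume, n_workers, axis=0)
          with Pool(n_workers) as pool:
              labelled = pool.map(segment_slab, [(s, threshold) for s in slabs])
          return np.concatenate(labelled, axis=0)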
  • Levelt, W. J. M. (2002). Picture naming and word frequency: Comments on Alario, Costa and Caramazza, Language and Cognitive Processes, 17(3), 299-319. Language and Cognitive Processes, 17(6), 663-671. doi:10.1080/01690960143000443.

    Abstract

    This commentary on Alario et al. (2002) addresses two issues: (1) Different from what the authors suggest, there are no theories of production claiming the phonological word to be the upper bound of advance planning before the onset of articulation; (2) Their picture naming study of word frequency effects on speech onset is inconclusive by lack of a crucial control, viz., of object recognition latency. This is a perennial problem in picture naming studies of word frequency and age of acquisition effects.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (1999). A developmental grammar for syllable structure in the production of child language. Brain and Language, 68, 291-299.

    Abstract

    The order of acquisition of Dutch syllable types by first language learners is analyzed as following from an initial ranking and subsequent rerankings of constraints in an optimality theoretic grammar. Initially, structural constraints are all ranked above faithfulness constraints, leading to core syllable (CV) productions only. Subsequently, faithfulness gradually rises to the highest position in the ranking, allowing more and more marked syllable types to appear in production. Local conjunctions of structural constraints allow for a more detailed analysis.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, 1-38. doi:10.1017/S0140525X99001776.

    Abstract

    Preparing words in speech production is normally a fast and accurate process. We generate them two or three per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.
  • Levelt, W. J. M. (2001). De vlieger die (onverwacht) wel opgaat. Natuur & Techniek, 69(6), 60.
  • Levelt, W. J. M. (2001). Defining dyslexia. Science, 292, 1300-1301.
  • Levelt, W. J. M. (1991). Die konnektionistische Mode. Sprache und Kognition, 10(2), 61-72.
  • Levelt, W. J. M. (1999). Models of word production. Trends in Cognitive Sciences, 3, 223-232.

    Abstract

    Research on spoken word production has been approached from two angles. In one research tradition, the analysis of spontaneous or induced speech errors led to models that can account for speech error distributions. In another tradition, the measurement of picture naming latencies led to chronometric models accounting for distributions of reaction times in word production. Both kinds of models are, however, dealing with the same underlying processes: (1) the speaker’s selection of a word that is semantically and syntactically appropriate; (2) the retrieval of the word’s phonological properties; (3) the rapid syllabification of the word in context; and (4) the preparation of the corresponding articulatory gestures. Models of both traditions explain these processes in terms of activation spreading through a localist, symbolic network. By and large, they share the main levels of representation: conceptual/semantic, syntactic, phonological and phonetic. They differ in various details, such as the amount of cascading and feedback in the network. These research traditions have begun to merge in recent years, leading to highly constructive experimentation. Currently, they are like two similar knives honing each other. A single pair of scissors is in the making.
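
    Both research traditions described above share the idea of activation spreading through a localist network, with selection based on relative activation. The toy sketch below illustrates only that generic mechanism; the decay rate, update rule, and Luce-style selection are illustrative assumptions and do not reproduce WEAVER++ or any other specific model.

      import numpy as np

      def spread_activation(weights, external_input, decay=0.4, steps=10):
          # weights[i, j]: connection strength from node j to node i in a localist network
          # whose nodes stand for concepts, lemmas, and phonological segments.
          a = np.zeros_like(external_input, dtype=float)
          for _ in range(steps):
              a = (1 - decay) * a + weights @ a + external_input
          return a

      def selection_probabilities(activations, candidates):
          # Luce-style ratio over candidate nodes: higher relative activation means a higher
          # probability of selection (and, in chronometric models, a shorter predicted latency).
          act = activations[candidates]
          return act / act.sum()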
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). Multiple perspectives on lexical access [authors' response]. Behavioral and Brain Sciences, 22, 61-72. doi:10.1017/S0140525X99451775.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (1995). Hoezo 'neuro'? Hoezo 'linguïstisch'? Intermediair, 31(46), 32-37.
  • Levelt, W. J. M. (1991). Lexical access in speech production: Stages versus cascading. In H. Peters, W. Hulstijn, & C. Starkweather (Eds.), Speech motor control and stuttering (pp. 3-10). Amsterdam: Excerpta Medica.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). Normal and deviant lexical processing: Reply to Dell and O'Seaghdha. Psychological Review, 98(4), 615-618. doi:10.1037/0033-295X.98.4.615.

    Abstract

    In their comment, Dell and O'Seaghdha (1991) adduced any effect on phonological probes for semantic alternatives to the activation of these probes in the lexical network. We argue that that interpretation is false and, in addition, that the model still cannot account for our data. Furthermore, and different from Dell and O'Seaghdha, we adduce semantic rebound to the lemma level, where it is so substantial that it should have shown up in our data. Finally, we question the function of feedback in a lexical network (other than eliciting speech errors) and discuss Dell's (1988) notion of a unified production-comprehension system.
  • Levelt, W. J. M. (1995). The ability to speak: From intentions to spoken words. European Review, 3(1), 13-23. doi:10.1017/S1062798700001290.

    Abstract

    In recent decades, psychologists have become increasingly interested in our ability to speak. This paper sketches the present theoretical perspective on this most complex skill of homo sapiens. The generation of fluent speech is based on the interaction of various processing components. These mechanisms are highly specialized, dedicated to performing specific subroutines, such as retrieving appropriate words, generating morpho-syntactic structure, computing the phonological target shape of syllables, words, phrases and whole utterances, and creating and executing articulatory programmes. As in any complex skill, there is a self-monitoring mechanism that checks the output. These component processes are targets of increasingly sophisticated experimental research, of which this paper presents a few salient examples.
  • Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13464-13471. doi:10.1073/pnas.231459498.

    Abstract

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker’s focusing on a target concept and ending with the initiation of articulation. The initial stages of preparation are concerned with lexical selection, which is zooming in on the appropriate lexical item in the mental lexicon. The following stages concern form encoding, i.e., retrieving a word’s morphemic phonological codes, syllabifying the word, and accessing the corresponding articulatory gestures. The theory is based on chronometric measurements of spoken word production, obtained, for instance, in picture-naming tasks. The theory is largely computationally implemented. It provides a handle on the analysis of multiword utterance production as well as a guide to the analysis and design of neuroimaging studies of spoken utterance production.
  • Levelt, W. J. M., Schriefers, H., Vorberg, D., Meyer, A. S., Pechmann, T., & Havinga, J. (1991). The time course of lexical access in speech production: A study of picture naming. Psychological Review, 98(1), 122-142. doi:10.1037/0033-295X.98.1.122.
  • Levelt, W. J. M. (2001). Woorden ophalen. Natuur en Techniek, 69(10), 74.
  • Levelt, W. J. M. (1979). On learnability: A reply to Lasnik and Chomsky. Unpublished manuscript.
  • Levinson, S. C., Kita, S., Haun, D. B. M., & Rasch, B. H. (2002). Returning the tables: Language affects spatial reasoning. Cognition, 84(2), 155-188. doi:10.1016/S0010-0277(02)00045-8.

    Abstract

    Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking – it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.
  • Levinson, S. C. (2002). Time for a linguistic anthropology of time. Current Anthropology, 43(4), S122-S123. doi:10.1086/342214.
  • Levinson, S. C. (1979). Activity types and language. Linguistics, 17, 365-399.
  • Levinson, S. C. (2008). Landscape, seascape and the ontology of places on Rossel Island, Papua New Guinea. Language Sciences, 30(2/3), 256-290. doi:10.1016/j.langsci.2006.12.032.

    Abstract

    This paper describes the descriptive landscape and seascape terminology of an isolate language, Yélî Dnye, spoken on a remote island off Papua New Guinea. The terminology reveals an ontology of landscape terms fundamentally mismatching that in European languages, and in current GIS applications. These landscape terms, and a rich set of seascape terms, provide the ontological basis for toponyms across subdomains. Considering what motivates landscape categorization, three factors are considered: perceptual salience, human affordance and use, and cultural ideas. The data show that cultural ideas and practices are the major categorizing force: they directly impact the ecology with environmental artifacts, construct religious ideas which play a major role in the use of the environment and its naming, and provide abstract cultural templates which organize large portions of vocabulary across subdomains.
  • Levinson, S. C., & Senft, G. (1991). Forschungsgruppe für Kognitive Anthropologie - Eine neue Forschungsgruppe in der Max-Planck-Gesellschaft. Linguistische Berichte, 133, 244-246.
  • Levinson, S. C. (1999). Maxim. Journal of Linguistic Anthropology, 9, 144-147. doi:10.1525/jlin.1999.9.1-2.144.
  • Levinson, S. C., & Senft, G. (1991). Research group for cognitive anthropology - A new research group of the Max Planck Society. Cognitive Linguistics, 2, 311-312.
  • Levinson, S. C. (1980). Speech act theory: The state of the art. Language Teaching and Linguistics: Abstracts, 5-24.

    Abstract

    Survey article
  • Levinson, S. C. (1991). Pragmatic reduction of the Binding Conditions revisited. Journal of Linguistics, 27, 107-161. doi:10.1017/S0022226700012433.

    Abstract

    In an earlier article (Levinson, 1987b), I raised the possibility that a Gricean theory of implicature might provide a systematic partial reduction of the Binding Conditions; the briefest of outlines is given in Section 2.1 below but the argumentation will be found in the earlier article. In this article I want, first, to show how that account might be further justified and extended, but then to introduce a radical alternative. This alternative uses the same pragmatic framework, but gives an account better adjusted to some languages. Finally, I shall attempt to show that both accounts can be combined by taking a diachronic perspective. The attraction of the combined account is that, suddenly, many facts about long-range reflexives and their associated logophoricity fall into place.
  • Levinson, S. C. (1979). Pragmatics and social deixis: Reclaiming the notion of conventional implicature. In C. Chiarello (Ed.), Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society (pp. 206-223).
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2008). Twelve-month-olds communicate helpfully and appropriately for knowledgeable and ignorant partners. Cognition, 108(3), 732-739. doi:10.1016/j.cognition.2008.06.013.

    Abstract

    In the current study we investigated whether 12-month-old infants gesture appropriately for knowledgeable versus ignorant partners, in order to provide them with needed information. In two experiments we found that in response to a searching adult, 12-month-olds pointed more often to an object whose location the adult did not know and thus needed information to find (she had not seen it fall down just previously) than to an object whose location she knew and thus did not need information to find (she had watched it fall down just previously). These results demonstrate that, in contrast to classic views of infant communication, infants’ early pointing at 12 months is already premised on an understanding of others’ knowledge and ignorance, along with a prosocial motive to help others by providing needed information.
  • Liszkowski, U. (2008). Before L1: A differentiated perspective on infant gestures. Gesture, 8(2), 180-196. doi:10.1075/gest.8.2.04lis.

    Abstract

    This paper investigates the social-cognitive and motivational complexities underlying prelinguistic infants' gestural communication. With regard to deictic referential gestures, new and recent experimental evidence shows that infant pointing is a complex communicative act based on social-cognitive skills and cooperative motives. With regard to infant representational gestures, findings suggest the need to re-interpret these gestures as initially non-symbolic gestural social acts. Based on the available empirical evidence, the paper argues that deictic referential communication emerges as a foundation of human communication first in gestures, already before language. Representational symbolic communication, instead, emerges as a transformation of deictic communication first in the vocal modality and, perhaps, in gestures through non-symbolic, socially situated routines.
  • Liszkowski, U., Albrecht, K., Carpenter, M., & Tomasello, M. (2008). Infants’ visual and auditory communication when a partner is or is not visually attending. Infant Behavior and Development, 31(2), 157-167. doi:10.1016/j.infbeh.2007.10.011.
  • Lucas, C., Griffiths, T., Xu, F., & Fawcett, C. (2008). A rational model of preference learning and choice prediction by children. In D. Koller, Y. Bengio, D. Schuurmans, L. Bottou, & A. Culotta (Eds.), Advances in Neural Information Processing Systems.

    Abstract

    Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices. However, there exists no overarching account of what children are doing when they learn about preferences or how they use that knowledge. We use a rational model of preference learning, drawing on ideas from economics and computer science, to explain the behavior of children in several recent experiments. Specifically, we show how a simple econometric model can be extended to capture two- to four-year-olds’ use of statistical information in inferring preferences, and their generalization of these preferences.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
  • Magyari, L., & De Ruiter, J. P. (2008). Timing in conversation: The anticipation of turn endings. In J. Ginzburg, P. Healey, & Y. Sato (Eds.), Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (pp. 139-146). London: King's College.

    Abstract

    We examined how communicators can switch between speaker and listener role with such accurate timing. During conversations, the majority of role transitions happens with a gap or overlap of only a few hundred milliseconds. This suggests that listeners can predict when the turn of the current speaker is going to end. Our hypothesis is that listeners know when a turn ends because they know how it ends. Anticipating the last words of a turn can help the next speaker in predicting when the turn will end, and also in anticipating the content of the turn, so that an appropriate response can be prepared in advance. We used the stimulus material of an earlier experiment (De Ruiter, Mitterer & Enfield, 2006), in which subjects were listening to turns from natural conversations and had to press a button exactly when the turn they were listening to ended. In the present experiment, we investigated if the subjects can complete those turns when only an initial fragment of the turn is presented to them. We found that the subjects made better predictions about the last words of those turns that had more accurate responses in the earlier button press experiment.
  • Majid, A., Boster, J. S., & Bowerman, M. (2008). The cross-linguistic categorization of everyday events: A study of cutting and breaking. Cognition, 109(2), 235-250. doi:10.1016/j.cognition.2008.08.009.

    Abstract

    The cross-linguistic investigation of semantic categories has a long history, spanning many disciplines and covering many domains. But the extent to which semantic categories are universal or language-specific remains highly controversial. Focusing on the domain of events involving material destruction (“cutting and breaking” events, for short), this study investigates how speakers of different languages implicitly categorize such events through the verbs they use to talk about them. Speakers of 28 typologically, genetically and geographically diverse languages were asked to describe the events shown in a set of videoclips, and the distribution of their verbs across the events was analyzed with multivariate statistics. The results show that there is considerable agreement across languages in the dimensions along which cutting and breaking events are distinguished, although there is variation in the number of categories and the placement of their boundaries. This suggests that there are strong constraints in human event categorization, and that variation is played out within a restricted semantic space.
  • Majid, A. (2008). Conceptual maps using multivariate statistics: Building bridges between typological linguistics and psychology [Commentary on Inferring universals from grammatical variation: Multidimensional scaling for typological analysis by William Croft and Keith T. Poole]. Theoretical Linguistics, 34(1), 59-66. doi:10.1515/THLI.2008.005.
  • Majid, A., & Huettig, F. (2008). A crosslinguistic perspective on semantic cognition [commentary on Precis of Semantic cognition: A parallel distributed approach by Timothy T. Rogers and James L. McClelland]. Behavioral and Brain Sciences, 31(6), 720-721. doi:10.1017/S0140525X08005967.

    Abstract

    Coherent covariation appears to be a powerful explanatory factor accounting for a range of phenomena in semantic cognition. But its role in accounting for the crosslinguistic facts is less clear. Variation in naming, within the same semantic domain, raises vexing questions about the necessary parameters needed to account for the basic facts underlying categorization.
  • Majid, A. (2002). Frames of reference and language concepts. Trends in Cognitive Sciences, 6(12), 503-504. doi:10.1016/S1364-6613(02)02024-7.
  • Majid, A., & Levinson, S. C. (2008). Language does provide support for basic tastes [Commentary on A study of the science of taste: On the origins and influence of the core ideas by Robert P. Erickson]. Behavioral and Brain Sciences, 31, 86-87. doi:10.1017/S0140525X08003476.

    Abstract

    Recurrent lexicalization patterns across widely different cultural contexts can provide a window onto common conceptualizations. The cross-linguistic data support the idea that sweet, salt, sour, and bitter are basic tastes. In addition, umami and fatty are likely basic tastes, as well.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2002). The influence of animacy on relative clause processing. Journal of Memory and Language, 47(1), 50-68. doi:10.1006/jmla.2001.2837.

    Abstract

    In previous research it has been shown that subject relative clauses are easier to process than object relative clauses. Several theories have been proposed that explain the difference on the basis of different theoretical perspectives. However, previous research tested relative clauses only with animate protagonists. In a corpus study of Dutch and German newspaper texts, we show that animacy is an important determinant of the distribution of subject and object relative clauses. In two experiments in Dutch, in which the animacy of the object of the relative clause is varied, no difference in reading time is obtained between subject and object relative clauses when the object is inanimate. The experiments show that animacy influences the processing difficulty of relative clauses. These results can only be accounted for by current major theories of relative clause processing when additional assumptions are introduced, and at the same time show that the possibility of semantically driven analysis can be considered as a serious alternative.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2008). Discourse structure and relative clause processing. Memory & Cognition, 36(1), 170-181. doi:10.3758/MC.36.1.170.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Tsuda, N., & Majid, A. (2008). Talking about walking: Biomechanics and the language of locomotion. Psychological Science, 19(3), 232-240. doi:10.1111/j.1467-9280.2008.02074.x.

    Abstract

    What drives humans around the world to converge in certain ways in their naming while diverging dramatically in others? We studied how naming patterns are constrained by investigating whether labeling of human locomotion reflects the biomechanical discontinuity between walking and running gaits. Similarity judgments of a student locomoting on a treadmill at different slopes and speeds revealed perception of this discontinuity. Naming judgments of the same clips by speakers of English, Japanese, Spanish, and Dutch showed lexical distinctions between walking and running consistent with the perceived discontinuity. Typicality judgments showed that major gait terms of the four languages share goodness-of-example gradients. These data demonstrate that naming reflects the biomechanical discontinuity between walking and running and that shared elements of naming can arise from correlations among stimulus properties that are dynamic and fleeting. The results support the proposal that converging naming patterns reflect structure in the world, not only acts of construction by observers.
  • Marlow, A. J., Fisher, S. E., Richardson, A. J., Francks, C., Talcott, J. B., Monaco, A. P., Stein, J. F., & Cardon, L. R. (2002). Investigation of quantitative measures related to reading disability in a large sample of sib-pairs from the UK. Behavior Genetics, 31(2), 219-230. doi:10.1023/A:1010209629021.

    Abstract

    We describe a family-based sample of individuals with reading disability collected as part of a quantitative trait loci (QTL) mapping study. Eighty-nine nuclear families (135 independent sib-pairs) were identified through a single proband using a traditional discrepancy score of predicted/actual reading ability and a known family history. Eight correlated psychometric measures were administered to each sibling, including single word reading, spelling, similarities, matrices, spoonerisms, nonword and irregular word reading, and a pseudohomophone test. Summary statistics for each measure showed a reduced mean for the probands compared to the co-sibs, which in turn was lower than that of the population. This partial co-sib regression back to the mean indicates that the measures are influenced by familial factors and therefore, may be suitable for a mapping study. The variance of each of the measures remained largely unaffected, which is reassuring for the application of a QTL approach. Multivariate genetic analysis carried out to explore the relationship between the measures identified a common factor between the reading measures that accounted for 54% of the variance. Finally the familiality estimates (range 0.32–0.73) obtained for the reading measures including the common factor (0.68) supported their heritability. These findings demonstrate the viability of this sample for QTL mapping, and will assist in the interpretation of any subsequent linkage findings in an ongoing genome scan.
  • Martin, A. E., & McElree, B. (2008). A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis. Journal of Memory and Language, 58(3), 879-906. doi:10.1016/j.jml.2007.06.010.

    Abstract

    Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context. We investigated the online interpretation of VP ellipsis in an eye-tracking experiment and four speed–accuracy tradeoff experiments. To investigate whether the antecedent for a VP ellipsis is accessed with a search or direct-access retrieval process, Experiments 1 and 2 measured the effect of the distance between an ellipsis and its antecedent on the speed and accuracy of comprehension. Accuracy was lower with longer distances, indicating that interpolated material reduced the quality of retrieved information about the antecedent. However, contra a search process, distance did not affect the speed of interpreting ellipsis. This pattern suggests that antecedent representations are content-addressable and retrieved with a direct-access process. To determine whether interpreting ellipsis involves copying antecedent information into the ellipsis site, Experiments 3–5 manipulated the length and complexity of the antecedent. Some types of antecedent complexity lowered accuracy, notably, the number of discourse entities in the antecedent. However, neither antecedent length nor complexity affected the speed of interpreting the ellipsis. This pattern is inconsistent with a copy operation, and it suggests that ellipsis interpretation may involve a pointer to extant structures in memory.
  • Matsuo, A., & Duffield, N. (2002). Assessing the generality of knowledge about English ellipsis in SLA. In J. Costa, & M. J. Freitas (Eds.), Proceedings of the GALA 2001 Conference on Language Acquisition (pp. 49-53). Lisboa: Associacao Portuguesa de Linguistica.
  • Matsuo, A., & Duffield, N. (2002). Finiteness and parallelism: Assessing the generality of knowledge about English ellipsis in SLA. In B. Skarabela, S. Fish, & A.-H.-J. Do (Eds.), Proceedings of the 26th Boston University Conference on Language Development (pp. 197-207). Somerville, Massachusetts: Cascadilla Press.
  • Mauner, G., Melinger, A., Koenig, J.-P., & Bienvenue, B. (2002). When is schematic participant information encoded: Evidence from eye-monitoring. Journal of Memory and Language, 47(3), 386-406. doi:10.1016/S0749-596X(02)00009-8.

    Abstract

    Two eye-monitoring studies examined when unexpressed schematic participant information specified by verbs is used during sentence processing. Experiment 1 compared the processing of sentences with passive and intransitive verbs hypothesized to introduce or not introduce, respectively, an agent when their main clauses were preceded by either agent-dependent rationale clauses or adverbial clause controls. While there were no differences in the processing of passive clauses following rationale and control clauses, intransitive verb clauses elicited anomaly effects following agent-dependent rationale clauses. To determine whether the source of this immediately available schematic participant information is lexically specified or instead derived solely from conceptual sources associated with verbs, Experiment 2 compared the processing of clauses with passive and middle verbs following rationale clauses (e.g., To raise money for the charity, the vase was/had sold quickly…). Although both passive and middle verb forms denote situations that logically require an agent, middle verbs, which by hypothesis do not lexically specify an agent, elicited longer processing times than passive verbs in measures of early processing. These results demonstrate that participants access and interpret lexically encoded schematic participant information in the process of recognizing a verb.
  • McCafferty, S. G., & Gullberg, M. (Eds.). (2008). Gesture and SLA: Toward an integrated approach [Special Issue]. Studies in Second Language Acquisition, 30(2).
  • McQueen, J. M., Norris, D., & Cutler, A. (2001). Can lexical knowledge modulate prelexical representations over time? In R. Smits, J. Kingston, T. Nearey, & R. Zondervan (Eds.), Proceedings of the workshop on Speech Recognition as Pattern Classification (SPRAAC) (pp. 145-150). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    The results of a study on perceptual learning are reported. Dutch subjects made lexical decisions on a list of words and nonwords. Embedded in the list were either [f]- or [s]-final words in which the final fricative had been replaced by an ambiguous sound, midway between [f] and [s]. One group of listeners heard ambiguous [f]-final Dutch words like [kara?] (based on karaf, carafe) and unambiguous [s]-final words (e.g., karkas, carcase). A second group heard the reverse (e.g., ambiguous [karka?] and unambiguous karaf). After this training phase, listeners labelled ambiguous fricatives on an [f]-[s] continuum. The subjects who had heard [?] in [f]-final words categorised these fricatives as [f] reliably more often than those who had heard [?] in [s]-final words. These results suggest that speech recognition is dynamic: the system adjusts to the constraints of each particular listening situation. The lexicon can provide this adjustment process with a training signal.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer word. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994, Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
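
    A small sketch of the kind of embedding statistics reported above: the proportion of carrier words that contain a shorter word of the vocabulary, and how often such an embedding is aligned with the carrier's onset. It operates on orthographic strings purely for illustration; the study's counts were computed over the phonological forms of English polysyllabic words.

      def embedding_stats(lexicon, carriers):
          # For each carrier word, find shorter lexicon words embedded within it and record
          # whether any embedding exists and whether one starts at the carrier's onset.
          words = set(lexicon)
          with_embedding = 0
          onset_aligned = 0
          for carrier in carriers:
              starts = [i for i in range(len(carrier))
                        for j in range(i + 1, len(carrier) + 1)
                        if carrier[i:j] in words and carrier[i:j] != carrier]
              if starts:
                  with_embedding += 1
                  onset_aligned += int(0 in starts)
          return with_embedding / len(carriers), onset_aligned / len(carriers)

      # e.g. embedding_stats(["cat", "a", "log", "catalogue"], ["catalogue"]) returns (1.0, 1.0)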
  • McQueen, J. M., Norris, D., & Cutler, A. (1999). Lexical influence in phonetic decision-making: Evidence from subcategorical mismatches. Journal of Experimental Psychology: Human Perception and Performance, 25, 1363-1389. doi:10.1037/0096-1523.25.5.1363.

    Abstract

    In 5 experiments, listeners heard words and nonwords, some cross-spliced so that they contained acoustic-phonetic mismatches. Performance was worse on mismatching than on matching items. Words cross-spliced with words and words cross-spliced with nonwords produced parallel results. However, in lexical decision and 1 of 3 phonetic decision experiments, performance on nonwords cross-spliced with words was poorer than on nonwords cross-spliced with nonwords. A gating study confirmed that there were misleading coarticulatory cues in the cross-spliced items; a sixth experiment showed that the earlier results were not due to interitem differences in the strength of these cues. Three models of phonetic decision making (the Race model, the TRACE model, and a postlexical model) did not explain the data. A new bottom-up model is outlined that accounts for the findings in terms of lexical involvement at a dedicated decision-making stage.
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni) or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • Melinger, A. (2002). Foot structure and accent in Seneca. International Journal of American Linguistics, 68(3), 287-315.

    Abstract

    This article argues that the Seneca accent system can be explained more simply and naturally if the foot structure is reanalyzed as trochaic. The position of the accent is determined by the position and structure of the accented syllable and by the position and structure of the post-tonic syllable; under an iambic analysis, the pair of syllables which interact to predict where accent is assigned fall in different feet.
  • Meyer, A. S., Ouellet, M., & Häcker, C. (2008). Parallel processing of objects in a naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 982-987. doi:10.1037/0278-7393.34.4.982.

    Abstract

    The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S., & Bock, K. (1999). Representations and processes in the production of pronouns: Some perspectives from Dutch. Journal of Memory and Language, 41(2), 281-301. doi:10.1006/jmla.1999.2649.

    Abstract

    The production and interpretation of pronouns involves the identification of a mental referent and, in connected speech or text, a discourse antecedent. One of the few overt signals of the relationship between a pronoun and its antecedent is agreement in features such as number and grammatical gender. To examine how speakers create these signals, two experiments tested conceptual, lexical, and morphophonological accounts of pronoun production in Dutch. The experiments employed sentence completion and continuation tasks with materials containing noun phrases that conflicted or agreed in grammatical gender. The noun phrases served as the antecedents for demonstrative pronouns (in Experiment 1) and relative pronouns (in Experiment 2) that required gender marking. Gender errors were used to assess the nature of the processes that established the link between pronouns and antecedents. There were more gender errors when candidate antecedents conflicted in grammatical gender, counter to the predictions of a pure conceptual hypothesis. Gender marking on candidate antecedents did not change the magnitude of this interference effect, counter to the predictions of an overt-morphology hypothesis. Mirroring previous findings about pronoun comprehension, the results suggest that speakers of gender-marking languages call on specific linguistic information about antecedents in order to select pronouns and that the information consists of specifications of grammatical gender associated with the lemmas of words.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Mitterer, H., & De Ruiter, J. P. (2008). Recalibrating color categories using world knowledge. Psychological Science, 19(7), 629-634. doi:10.1111/j.1467-9280.2008.02133.x.

    Abstract

    When the perceptual system uses color to facilitate object recognition, it must solve the color-constancy problem: The light an object reflects to an observer's eyes confounds properties of the source of the illumination with the surface reflectance of the object. Information from the visual scene (bottom-up information) is insufficient to solve this problem. We show that observers use world knowledge about objects and their prototypical colors as a source of top-down information to improve color constancy. Specifically, observers use world knowledge to recalibrate their color categories. Our results also suggest that similar effects previously observed in language perception are the consequence of a general perceptual process.
  • Mitterer, H., & Ernestus, M. (2008). The link between speech perception and production is phonological and abstract: Evidence from the shadowing task. Cognition, 109(1), 168-173. doi:10.1016/j.cognition.2008.08.002.

    Abstract

    This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.
  • Mitterer, H., Yoneyama, K., & Ernestus, M. (2008). How we hear what is hardly there: Mechanisms underlying compensation for /t/-reduction in speech comprehension. Journal of Memory and Language, 59, 133-152. doi:10.1016/j.jml.2008.02.004.

    Abstract

    In four experiments, we investigated how listeners compensate for reduced /t/ in Dutch. Mitterer and Ernestus [Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers lenite: evidence from /t/-lenition in Dutch. Journal of Phonetics, 34, 73–103] showed that listeners are biased to perceive a /t/ more easily after /s/ than after /n/, compensating for the tendency of speakers to reduce word-final /t/ after /s/ in spontaneous conversations. We tested the robustness of this phonological context effect in perception with three very different experimental tasks: an identification task, a discrimination task with native listeners and with non-native listeners who do not have any experience with /t/-reduction, and a passive listening task (using electrophysiological dependent measures). The context effect was generally robust against these experimental manipulations, although we also observed some deviations from the overall pattern. Our combined results show that the context effect in compensation for reduced /t/ results from a complex process involving auditory constraints, phonological learning, and lexical constraints.
  • Mitterer, H. (2008). How are words reduced in spontaneous speech? In A. Botinis (Ed.), Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics (pp. 165-168). Athens: University of Athens.

    Abstract

    Words are reduced in spontaneous speech. If reductions are constrained by functional (i.e., perception and production) constraints, they should not be arbitrary. This hypothesis was tested by examining the pronunciations of high- to mid-frequency words in a Dutch and a German spontaneous speech corpus. In logistic-regression models the "reduction likelihood" of a phoneme was predicted by fixed-effect predictors such as position within the word, word length, word frequency, and stress, as well as random effects such as phoneme identity and word. The models for Dutch and German show many commonalities. This is in line with the assumption that similar functional constraints influence reductions in both languages.
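    As an illustration of the kind of model described in this abstract, the sketch below fits a fixed-effects-only logistic regression to simulated data; the variable names, the data, and the statsmodels call are assumptions for exposition only. The reported models additionally included random effects for phoneme identity and word, which would require a mixed-effects tool (e.g., glmer in R's lme4).

```python
# Hedged sketch: fixed-effects logistic regression on invented data, loosely
# mirroring the predictors named in the abstract (position, word length,
# word frequency, stress). Not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "position":    rng.integers(1, 6, n),    # position of the phoneme within the word
    "word_length": rng.integers(3, 8, n),    # number of phonemes in the word
    "log_freq":    rng.normal(2.0, 0.5, n),  # log word frequency
    "stressed":    rng.integers(0, 2, n),    # 1 = phoneme in a stressed syllable
})

# Simulate a binary "reduced" outcome loosely tied to some of the predictors.
logit_p = -2.0 + 0.4 * df["position"] - 0.2 * df["stressed"] + 0.3 * df["log_freq"]
df["reduced"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Fit and inspect the fixed-effects model.
model = smf.logit("reduced ~ position + word_length + log_freq + stressed", data=df)
print(model.fit(disp=False).summary())
```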
  • Moore, R. K., & Cutler, A. (2001). Constraints on theories of human vs. machine recognition of speech. In R. Smits, J. Kingston, T. Neary, & R. Zondervan (Eds.), Proceedings of the workshop on Speech Recognition as Pattern Classification (SPRAAC) (pp. 145-150). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    The central issues in the study of speech recognition by human listeners (HSR) and of automatic speech recognition (ASR) are clearly comparable; nevertheless the research communities that concern themselves with ASR and HSR are largely distinct. This paper compares the research objectives of the two fields, and attempts to draw informative lessons from one to the other.
  • Morgan, J. L., Van Elswijk, G., & Meyer, A. S. (2008). Extrafoveal processing of objects in a naming task: Evidence from word probe experiments. Psychonomic Bulletin & Review, 15, 561-565. doi:10.3758/PBR.15.3.561.

    Abstract

    In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2008). Speech planning during multiple-object naming: Effects of ageing. Quarterly Journal of Experimental Psychology, 61, 1217-1238. doi:10.1080/17470210701467912.

    Abstract

    Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.
  • Narasimhan, B., & Dimroth, C. (2008). Word order and information status in child language. Cognition, 107, 317-329. doi:10.1016/j.cognition.2007.07.010.

    Abstract

    In expressing rich, multi-dimensional thought in language, speakers are influenced by a range of factors that influence the ordering of utterance constituents. A fundamental principle that guides constituent ordering in adults has to do with information status, the accessibility of referents in discourse. Typically, adults order previously mentioned referents (“old” or accessible information) first, before they introduce referents that have not yet been mentioned in the discourse (“new” or inaccessible information) at both sentential and phrasal levels. Here we ask whether a similar principle influences ordering patterns at the phrasal level in children who are in the early stages of combining words productively. Prior research shows that when conveying semantic relations, children reproduce language-specific ordering patterns in the input, suggesting that they do not have a bias for any particular order to describe “who did what to whom”. But our findings show that when they label “old” versus “new” referents, 3- to 5-year-old children prefer an ordering pattern opposite to that of adults (Study 1). Children’s ordering preference is not derived from input patterns, as “old-before-new” is also the preferred order in caregivers’ speech directed to young children (Study 2). Our findings demonstrate that a key principle governing ordering preferences in adults does not originate in early childhood, but develops: from new-to-old to old-to-new.
  • Need, A. C., Attix, D. K., McEvoy, J. M., Cirulli, E. T., Linney, K. N., Wagoner, A. P., Gumbs, C. E., Giegling, I., Möller, H.-J., Francks, C., Muglia, P., Roses, A., Gibson, G., Weale, M. E., Rujescu, D., & Goldstein, D. B. (2008). Failure to replicate effect of Kibra on human memory in two large cohorts of European origin. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147B, 667-668. doi:10.1002/ajmg.b.30658.

    Abstract

    It was recently suggested that the Kibra polymorphism rs17070145 has a strong effect on multiple episodic memory tasks in humans. We attempted to replicate this using two cohorts of European genetic origin (n = 319 and n = 365). We found no association with either the original SNP or a set of tagging SNPs in the Kibra gene with multiple verbal memory tasks, including one that was an exact replication (Auditory Verbal Learning Task, AVLT). These results suggest that Kibra does not have a strong and general effect on human memory.

  • Newbury, D. F., Cleak, J. D., Ishikawa-Brush, Y., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Bolton, P. F., Jannoun, L., Slonims, V., Baird, G., Pickles, A., Bishop, D. V. M., Helms., P. J., & The SLI Consortium (2002). A genomewide scan identifies two novel loci involved in specific language impairment. American Journal of Human Genetics, 70(2), 384-398. doi:10.1086/338649.

    Abstract

    Approximately 4% of English-speaking children are affected by specific language impairment (SLI), a disorder in the development of language skills despite adequate opportunity and normal intelligence. Several studies have indicated the importance of genetic factors in SLI; a positive family history confers an increased risk of development, and concordance in monozygotic twins consistently exceeds that in dizygotic twins. However, like many behavioral traits, SLI is assumed to be genetically complex, with several loci contributing to the overall risk. We have compiled 98 families drawn from epidemiological and clinical populations, all with probands whose standard language scores fall ⩾1.5 SD below the mean for their age. Systematic genomewide quantitative-trait–locus analysis of three language-related measures (i.e., the Clinical Evaluation of Language Fundamentals–Revised [CELF-R] receptive and expressive scales and the nonword repetition [NWR] test) yielded two regions, one on chromosome 16 and one on 19, that both had maximum LOD scores of 3.55. Simulations suggest that, of these two multipoint results, the NWR linkage to chromosome 16q is the most significant, with empirical P values reaching 10^-5, under both Haseman-Elston (HE) analysis (LOD score 3.55; P=.00003) and variance-components (VC) analysis (LOD score 2.57; P=.00008). Single-point analyses provided further support for involvement of this locus, with three markers, under the peak of linkage, yielding LOD scores >1.9. The 19q locus was linked to the CELF-R expressive-language score and exceeds the threshold for suggestive linkage under all types of analysis performed—multipoint HE analysis (LOD score 3.55; empirical P=.00004) and VC (LOD score 2.84; empirical P=.00027) and single-point HE analysis (LOD score 2.49) and VC (LOD score 2.22). Furthermore, both the clinical and epidemiological samples showed independent evidence of linkage on both chromosome 16q and chromosome 19q, indicating that these may represent universally important loci in SLI and, thus, general risk factors for language impairment.
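    For readers unfamiliar with the linkage statistic cited in this abstract, the LOD score is the standard measure (this definition is general, not specific to this study): the base-10 logarithm of the ratio between the likelihood of the marker data at a hypothesised recombination fraction θ and the likelihood under free recombination, so a LOD of 3 corresponds to odds of roughly 1000:1 in favour of linkage at that locus.

```latex
\mathrm{LOD}(\theta) \;=\; \log_{10}
  \frac{L(\text{marker data} \mid \theta)}
       {L(\text{marker data} \mid \theta = 0.5)},
\qquad 0 \le \theta < 0.5
```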
  • Newbury, D. F., Bonora, E., Lamb, J. A., Fisher, S. E., Lai, C. S. L., Baird, G., Jannoun, L., Slonims, V., Stott, C. M., Merricks, M. J., Bolton, P. F., Bailey, A. J., Monaco, A. P., & International Molecular Genetic Study of Autism Consortium (2002). FOXP2 is not a major susceptibility gene for autism or specific language impairment. American Journal of Human Genetics, 70(5), 1318-1327. doi:10.1086/339931.

    Abstract

    The FOXP2 gene, located on human 7q31 (at the SPCH1 locus), encodes a transcription factor containing a polyglutamine tract and a forkhead domain. FOXP2 is mutated in a severe monogenic form of speech and language impairment, segregating within a single large pedigree, and is also disrupted by a translocation in an isolated case. Several studies of autistic disorder have demonstrated linkage to a similar region of 7q (the AUTS1 locus), leading to the proposal that a single genetic factor on 7q31 contributes to both autism and language disorders. In the present study, we directly evaluate the impact of the FOXP2 gene with regard to both complex language impairments and autism, through use of association and mutation screening analyses. We conclude that coding-region variants in FOXP2 do not underlie the AUTS1 linkage and that the gene is unlikely to play a role in autism or more common forms of language impairment.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Nieuwland, M. S., & Kuperberg, G. R. (2008). When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation. Psychological Science, 19(12), 1213-1218. doi:10.1111/j.1467-9280.2008.02226.x.

    Abstract

    Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like not. However, studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than did true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn't bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny's fur isn't very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
  • Nobe, S., Furuyama, N., Someya, Y., Sekine, K., Suzuki, M., & Hayashi, K. (2008). A longitudinal study on gesture of simultaneous interpreter. The Japanese Journal of Speech Sciences, 8, 63-83.
  • Norris, D., & McQueen, J. M. (2008). Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review, 115(2), 357-395. doi:10.1037/0033-295X.115.2.357.

    Abstract

    A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
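    A toy numerical illustration of the Bayesian decision rule summarised here (not Shortlist B itself, which also handles segmentation and uses three probability slices per segment): each candidate word's frequency-based prior is combined with the likelihood of the phoneme-probability input, and the products are normalised over the candidate set. The lexicon, priors, and input values below are invented.

```python
# Hedged sketch of Bayesian word scoring over phoneme-probability input.

def posterior_over_words(lexicon, priors, input_probs):
    """lexicon: word -> tuple of phonemes; priors: word -> prior probability;
    input_probs: one dict per input segment, mapping phoneme -> probability."""
    scores = {}
    for word, phonemes in lexicon.items():
        if len(phonemes) != len(input_probs):
            continue  # toy simplification: only words spanning the whole input
        likelihood = 1.0
        for slot, phoneme in zip(input_probs, phonemes):
            likelihood *= slot.get(phoneme, 1e-6)  # small floor for unseen phonemes
        scores[word] = priors[word] * likelihood
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()} if total else scores

# Invented example: ambiguous onset and coda, 'cat' favoured by prior and coda.
lexicon = {"cap": ("k", "ae", "p"), "cat": ("k", "ae", "t"), "gap": ("g", "ae", "p")}
priors  = {"cap": 0.2, "cat": 0.6, "gap": 0.2}
input_probs = [{"k": 0.7, "g": 0.3}, {"ae": 1.0}, {"p": 0.4, "t": 0.6}]
print(posterior_over_words(lexicon, priors, input_probs))
```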
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
