Publications

  • Levinson, S. C. (2008). Space in language and cognition. Singapore: Word Publishing Company/CUP.

    Abstract

    Chinese translation of the 2003 publication.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (2011). Pojmowanie przestrzeni w różnych kulturach [Polish translation of Levinson, S. C. 1998. Studying spatial conceptualization across cultures]. Autoportret, 33, 16-23.

    Abstract

    Polish translation of Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7
  • Levinson, S. C. (1983). Pragmatics. Cambridge: Cambridge University Press.
  • Levinson, S. C. (1979). Pragmatics and social deixis: Reclaiming the notion of conventional implicature. In C. Chiarello (Ed.), Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society (pp. 206-223).
  • Levinson, S. C., & Majid, A. (2008). Preface and priorities. In A. Majid (Ed.), Field manual volume 11 (pp. iii-iv). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge University Press.
  • Levinson, S. C. (2011). Three levels of meaning: Essays in honor of Sir John Lyons [Reprint]. In A. Kasher (Ed.), Pragmatics II. London: Routledge.

    Abstract

    Reprint from Stephen C. Levinson, ‘Three Levels of Meaning’, in Frank Palmer (ed.), Grammar and Meaning: Essays in Honor of Sir John Lyons (Cambridge University Press, 1995), pp. 90–115
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2008). Time and space questionnaire. In A. Majid (Ed.), Field Manual Volume 11 (pp. 42-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492955.

    Abstract

    This entry contains: 1. An invitation to think about the extent to which the grammars of space and time share lexical and morphosyntactic resources − the suggestions here are only prompts, since it would take a long questionnaire to fully explore this; 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal − but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levinson, S. C., Greenhill, S. J., Gray, R. D., & Dunn, M. (2011). Universal typological dependencies should be detectable in the history of language families. Linguistic Typology, 15, 509-534. doi:10.1515/LITY.2011.034.

    Abstract

    We claim that making sense of the typological diversity of languages demands a historical/evolutionary approach. We are pleased that the target paper (Dunn et al. 2011a) has served to bring discussion of this claim into prominence, and are grateful that leading typologists have taken the time to respond (commentaries denoted by boldface). It is unfortunate though that a number of the commentaries in this issue of LT show significant misunderstandings of our paper. Donohue thinks we were out to show the stability of typological features, but that was not our target at all (although related methods can be used to do that: see, e.g., Greenhill et al. 2010a, Dediu 2011a). Plank seems to think we were arguing against universals of any type, but our target was in fact just the implicational universals of word order that have been the bread and butter of typology. He also seems to think we ignore diachrony, whereas in fact the method introduces diachrony centrally into typological reasoning, thereby potentially revolutionising typology (see Cysouw’s commentary). Levy & Daumé think we were testing for lineage-specificity, whereas that was in fact an outcome (the main finding) of our testing for correlated evolution. Dryer thinks we must account for the distribution of language types around the world, but that was not our aim: our aim was to test the causal connection between linguistic variables by taking the perspective of language evolution (diversification and change). Longobardi & Roberts seem to think we set out to extract family trees from syntactic features, but our goal was in fact to use trees based on lexical cognates and hang reconstructed syntactic states on each node of these trees, thereby reconstructing the processes of language change.
  • Levinson, S. C. (2011). Universals in pragmatics. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 654-657). New York: Cambridge University Press.

    Abstract

    Changing Prospects for Universals in Pragmatics
    The term PRAGMATICS has come to denote the study of general principles of language use. It is usually understood to contrast with SEMANTICS, the study of encoded meaning, and also, by some authors, to contrast with SOCIOLINGUISTICS and the ethnography of speaking, which are more concerned with local sociocultural practices. Given that pragmaticists come from disciplines as varied as philosophy, sociology, linguistics, communication studies, psychology, and anthropology, it is not surprising that definitions of pragmatics vary. Nevertheless, most authors agree on a list of topics that come under the rubric, including DEIXIS, PRESUPPOSITION, implicature (see CONVERSATIONAL IMPLICATURE), SPEECH-ACTS, and conversational organization (see CONVERSATIONAL ANALYSIS). Here, we can use this extensional definition as a starting point (Levinson 1988; Huang 2007).
  • Lindell, A. K., & Kidd, E. (2011). Why right-brain teaching is half-witted: A critique of the misapplication of neuroscience to education. Mind, Brain and Education, 5(3), 121-127. doi:10.1111/j.1751-228X.2011.01120.x.

    Abstract

    Educational tools claiming to use “right-brain techniques” are increasingly shaping school curricula. By implying a strong scientific basis, such approaches appeal to educators who rightly believe that knowledge of the brain should guide curriculum development. However, the notion of hemisphericity (the idea that people are “left-brained” or “right-brained”) is a neuromyth that was debunked in the scientific literature 25 years ago. This article challenges the validity of “right-brain” teaching, highlighting the fact that neuroscientific research does not support its claims. Providing teachers with a basic understanding of neuroscience research as part of teacher training would enable more effective evaluation of brain-based claims and facilitate the adoption of tools validated by rigorous independent research rather than programs based on pseudoscience.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2008). Twelve-month-olds communicate helpfully and appropriately for knowledgeable and ignorant partners. Cognition, 108(3), 732-739. doi:10.1016/j.cognition.2008.06.013.

    Abstract

    In the current study we investigated whether 12-month-old infants gesture appropriately for knowledgeable versus ignorant partners, in order to provide them with needed information. In two experiments we found that in response to a searching adult, 12-month-olds pointed more often to an object whose location the adult did not know and thus needed information to find (she had not seen it fall down just previously) than to an object whose location she knew and thus did not need information to find (she had watched it fall down just previously). These results demonstrate that, in contrast to classic views of infant communication, infants’ early pointing at 12 months is already premised on an understanding of others’ knowledge and ignorance, along with a prosocial motive to help others by providing needed information.
  • Liszkowski, U. (2008). Before L1: A differentiated perspective on infant gestures. Gesture, 8(2), 180-196. doi:10.1075/gest.8.2.04lis.

    Abstract

    This paper investigates the social-cognitive and motivational complexities underlying prelinguistic infants' gestural communication. With regard to deictic referential gestures, new and recent experimental evidence shows that infant pointing is a complex communicative act based on social-cognitive skills and cooperative motives. With regard to infant representational gestures, findings suggest the need to re-interpret these gestures as initially non-symbolic gestural social acts. Based on the available empirical evidence, the paper argues that deictic referential communication emerges as a foundation of human communication first in gestures, already before language. Representational symbolic communication, instead, emerges as a transformation of deictic communication first in the vocal modality and, perhaps, in gestures through non-symbolic, socially situated routines.
  • Liszkowski, U., & Tomasello, M. (2011). Individual differences in social, cognitive, and morphological aspects of infant pointing. Cognitive Development, 26, 16-29. doi:10.1016/j.cogdev.2010.10.001.

    Abstract

    Little is known about the origins of the pointing gesture. We sought to gain insight into its emergence by investigating individual differences in the pointing of 12-month-old infants in two ways. First, we looked at differences in the communicative and interactional uses of pointing and asked how different hand shapes relate to point frequency, accompanying vocalizations, and mothers’ pointing. Second, we looked at differences in social-cognitive skills of point comprehension and imitation and tested whether these were related to infants’ own pointing. Infants’ and mothers’ spontaneous pointing correlated with one another, as did infants’ point production and comprehension. In particular, infants’ index-finger pointing had a profile different from simple whole-hand pointing. It was more frequent, it was more often accompanied by vocalizations, and it correlated more strongly with comprehension of pointing (especially to occluded referents). We conclude that whole-hand and index-finger pointing differ qualitatively and suggest that it is index-finger pointing that first embodies infants’ understanding of communicative intentions.
  • Liszkowski, U., Albrecht, K., Carpenter, M., & Tomasello, M. (2008). Infants’ visual and auditory communication when a partner is or is not visually attending. Infant Behavior and Development, 31(2), 157-167. doi:10.1016/j.infbeh.2007.10.011.
  • Liszkowski, U. (2011). Three lines in the emergence of prelinguistic communication and social cognition. Journal of cognitive education and psychology, 10(1), 32-43. doi:10.1891/1945-8959.10.1.32.

    Abstract

    Sociocultural theories of development posit that higher cognitive functions emerge through socially mediated processes, in particular through language. However, theories of human communication posit that language itself is based on higher social cognitive skills and cooperative motivations. Prelinguistic communication is a test case for this puzzle. In the current review, I first present recent and new findings of a research program on prelinguistic infants’ communication skills. This research provides empirical evidence for a rich social cognitive and motivational basis of human communication before language. Next, I discuss the emergence of these foundational skills. By considering all three lines of development, and by drawing on new findings from phylogenetic and cross-cultural comparisons, this article discusses the possibility that the cognitive foundations of prelinguistic communication are, in turn, mediated by social interactional input and shared experiences.
  • Lucas, C., Griffiths, T., Xu, F., & Fawcett, C. (2008). A rational model of preference learning and choice prediction by children. In D. Koller, Y. Bengio, D. Schuurmans, L. Bottou, & A. Culotta (Eds.), Advances in Neural Information Processing Systems.

    Abstract

    Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices. However, there exists no overarching account of what children are doing when they learn about preferences or how they use that knowledge. We use a rational model of preference learning, drawing on ideas from economics and computer science, to explain the behavior of children in several recent experiments. Specifically, we show how a simple econometric model can be extended to capture two- to four-year-olds’ use of statistical information in inferring preferences, and their generalization of these preferences.
  • Mace, R., & Jordan, F. (2011). Macro-evolutionary studies of cultural diversity: A review of empirical studies of cultural transmission and cultural adaptation. Philosophical Transactions of the Royal Society of London B, Biological Sciences, 366, 402-411. doi:10.1098/rstb.2010.0238.

    Abstract

    A growing body of theoretical and empirical research has examined cultural transmission and adaptive cultural behaviour at the individual, within-group level. However, relatively few studies have tried to examine proximate transmission or test ultimate adaptive hypotheses about behavioural or cultural diversity at a between-societies macro-level. In both the history of anthropology and in present-day work, a common approach to examining adaptive behaviour at the macro-level has been through correlating various cultural traits with features of ecology. We discuss some difficulties with simple ecological associations, and then review cultural phylogenetic studies that have attempted to go beyond correlations to understand the underlying cultural evolutionary processes. We conclude with an example of a phylogenetically controlled approach to understanding proximate transmission pathways in Austronesian cultural diversity.
  • Magyari, L., & De Ruiter, J. P. (2008). Timing in conversation: The anticipation of turn endings. In J. Ginzburg, P. Healey, & Y. Sato (Eds.), Proceedings of the 12th Workshop on the Semantics and Pragmatics Dialogue (pp. 139-146). London: King's college.

    Abstract

    We examined how communicators can switch between speaker and listener roles with such accurate timing. During conversations, the majority of role transitions happen with a gap or overlap of only a few hundred milliseconds. This suggests that listeners can predict when the turn of the current speaker is going to end. Our hypothesis is that listeners know when a turn ends because they know how it ends. Anticipating the last words of a turn can help the next speaker in predicting when the turn will end, and also in anticipating the content of the turn, so that an appropriate response can be prepared in advance. We used the stimulus materials of an earlier experiment (De Ruiter, Mitterer & Enfield, 2006), in which subjects were listening to turns from natural conversations and had to press a button exactly when the turn they were listening to ended. In the present experiment, we investigated whether the subjects could complete those turns when only an initial fragment of the turn was presented to them. We found that the subjects made better predictions about the last words of those turns that had elicited more accurate responses in the earlier button-press experiment.
  • Magyari, L. (2008). A mentális lexikon modelljei és a magyar nyelv (Models of the mental lexicon and the Hungarian language). In J. Gervain, & C. Pléh (Eds.), A láthatatlan nyelv (The Invisible Language). Budapest: Gondolat Kiadó.
  • Majid, A., van Leeuwen, T., & Dingemanse, M. (2008). Synaesthesia: A cross-cultural pilot. In A. Majid (Ed.), Field manual volume 11 (pp. 37-41). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492960.

    Abstract

    This Field Manual entry has been superseded by the 2009 version:
    https://doi.org/10.17617/2.883570
  • Majid, A., Boster, J. S., & Bowerman, M. (2008). The cross-linguistic categorization of everyday events: A study of cutting and breaking. Cognition, 109(2), 235-250. doi:10.1016/j.cognition.2008.08.009.

    Abstract

    The cross-linguistic investigation of semantic categories has a long history, spanning many disciplines and covering many domains. But the extent to which semantic categories are universal or language-specific remains highly controversial. Focusing on the domain of events involving material destruction (“cutting and breaking” events, for short), this study investigates how speakers of different languages implicitly categorize such events through the verbs they use to talk about them. Speakers of 28 typologically, genetically and geographically diverse languages were asked to describe the events shown in a set of videoclips, and the distribution of their verbs across the events was analyzed with multivariate statistics. The results show that there is considerable agreement across languages in the dimensions along which cutting and breaking events are distinguished, although there is variation in the number of categories and the placement of their boundaries. This suggests that there are strong constraints in human event categorization, and that variation is played out within a restricted semantic space.
  • Majid, A. (2008). Conceptual maps using multivariate statistics: Building bridges between typological linguistics and psychology [Commentary on Inferring universals from grammatical variation: Multidimensional scaling for typological analysis by William Croft and Keith T. Poole]. Theoretical Linguistics, 34(1), 59-66. doi:10.1515/THLI.2008.005.
  • Majid, A., & Huettig, F. (2008). A crosslinguistic perspective on semantic cognition [commentary on Precis of Semantic cognition: A parallel distributed approach by Timothy T. Rogers and James L. McClelland]. Behavioral and Brain Sciences, 31(6), 720-721. doi:10.1017/S0140525X08005967.

    Abstract

    Coherent covariation appears to be a powerful explanatory factor accounting for a range of phenomena in semantic cognition. But its role in accounting for the crosslinguistic facts is less clear. Variation in naming, within the same semantic domain, raises vexing questions about the necessary parameters needed to account for the basic facts underlying categorization.
  • Majid, A., & Levinson, S. C. (2008). Language does provide support for basic tastes [Commentary on A study of the science of taste: On the origins and influence of the core ideas by Robert P. Erickson]. Behavioral and Brain Sciences, 31, 86-87. doi:10.1017/S0140525X08003476.

    Abstract

    Recurrent lexicalization patterns across widely different cultural contexts can provide a window onto common conceptualizations. The cross-linguistic data support the idea that sweet, salt, sour, and bitter are basic tastes. In addition, umami and fatty are likely basic tastes, as well.
  • Majid, A. (Ed.). (2008). Field manual volume 11. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Majid, A. (2008). Focal colours. In A. Majid (Ed.), Field Manual Volume 11 (pp. 8-10). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492958.

    Abstract

    In this task we aim to find what the best exemplars or “focal colours” of each basic colour term are in our field languages. This is an important part of the evidence we need in order to understand the colour data collected using 'The Language of Vision I: Colour'. This task consists of an experiment where participants pick out the best exemplar for the colour terms in their language. The goal is to establish language-specific focal colours.
  • Majid, A., Evans, N., Gaby, A., & Levinson, S. C. (2011). The semantics of reciprocal constructions across languages: An extensional approach. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 29-60). Amsterdam: Benjamins.

    Abstract

    How similar are reciprocal constructions in the semantic parameters they encode? We investigate this question by using an extensional approach, which examines similarity of meaning by examining how constructions are applied over a set of 64 videoclips depicting reciprocal events (Evans et al. 2004). We apply statistical modelling to descriptions from speakers of 20 languages elicited using the videoclips. We show that there are substantial differences in meaning between constructions of different languages.
  • Majid, A., & Levinson, S. C. (Eds.). (2011). The senses in language and culture [Special Issue]. The Senses & Society, 6(1).
  • Majid, A., & Levinson, S. C. (2011). The senses in language and culture. The Senses & Society, 6(1), 5-18. doi:10.2752/174589311X12893982233551.

    Abstract

    Multiple social science disciplines have converged on the senses in recent years, where formerly the domain of perception was the preserve of psychology. Linguistics, or Language, however, seems to have an ambivalent role in this undertaking. On the one hand, Language with a capital L (language as a general human capacity) is part of the problem. It was the prior focus on language (text) that led to the disregard of the senses. On the other hand, it is language (with a small "l", a particular tongue) that offers key insights into how other peoples conceptualize the senses. In this article, we argue that a systematic cross-cultural approach can reveal fundamental truths about the precise connections between language and the senses. Recurring failures to adequately describe the sensorium across specific languages reveal the intrinsic limits of Language. But the converse does not hold. Failures of expressibility in one language need not hold any implications for the Language faculty per se, and indeed can enlighten us about the possible experiential worlds available to human experience.
  • Majid, A., Evans, N., Gaby, A., & Levinson, S. C. (2011). The grammar of exchange: A comparative study of reciprocal constructions across languages. Frontiers in Psychology, 2: 34. doi:10.3389/fpsyg.2011.00034.

    Abstract

    Cultures are built on social exchange. Most languages have dedicated grammatical machinery for expressing this. To demonstrate that statistical methods can also be applied to grammatical meaning, we here ask whether the underlying meanings of these grammatical constructions are based on shared common concepts. To explore this, we designed video stimuli of reciprocated actions (e.g. ‘giving to each other’) and symmetrical states (e.g. ‘sitting next to each other’), and with the help of a team of linguists collected responses from 20 languages around the world. Statistical analyses revealed that many languages do, in fact, share a common conceptual core for reciprocal meanings but that this is not a universally expressed concept. The recurrent pattern of conceptual packaging found across languages is compatible with the view that there is a shared non-linguistic understanding of reciprocation. But, nevertheless, there are considerable differences between languages in the exact extensional patterns, highlighting that even in the domain of grammar semantics is highly language-specific.
  • Majid, A., & Levinson, S. C. (2011). The language of perception across cultures [Abstract]. Abstracts of the XXth Congress of European Chemoreception Research Organization, ECRO-2010. Publ. in Chemical Senses, 36(1), E7-E8.

    Abstract

    How are the senses structured by the languages we speak, the cultures we inhabit? To what extent is the encoding of perceptual experiences in languages a matter of how the mind/brain is "wired-up" and to what extent is it a question of local cultural preoccupation? The "Language of Perception" project tests the hypothesis that some perceptual domains may be more "ineffable" – i.e. difficult or impossible to put into words – than others. While cognitive scientists have assumed that proximate senses (olfaction, taste, touch) are more ineffable than distal senses (vision, hearing), anthropologists have illustrated the exquisite variation and elaboration the senses achieve in different cultural milieus. The project is designed to test whether the proximate senses are universally ineffable – suggesting an architectural constraint on cognition – or whether they are just accidentally so in Indo-European languages, so expanding the role of cultural interests and preoccupations. To address this question, a standardized set of stimuli of color patches, geometric shapes, simple sounds, tactile textures, smells and tastes has been used to elicit descriptions from speakers of more than twenty languages—including three sign languages. The languages are typologically, genetically and geographically diverse, representing a wide range of cultures. The communities sampled vary in subsistence modes (hunter-gatherer to industrial), ecological zones (rainforest jungle to desert), dwelling types (rural and urban), and various other parameters. We examine how codable the different sensory modalities are by comparing how consistent speakers are in how they describe the materials in each modality. Our current analyses suggest that taste may, in fact, be the most codable sensorial domain across languages. Moreover, we have identified exquisite elaboration in the olfactory domains in some cultural settings, contrary to some contemporary predictions within the cognitive sciences. These results suggest that differential codability may be at least partly the result of cultural preoccupation. This shows that the senses are not just physiological phenomena but are constructed through linguistic, cultural and social practices.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2008). Discourse structure and relative clause processing. Memory & Cognition, 36(1), 170-181. doi:10.3758/MC.36.1.170.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Tsuda, N., & Majid, A. (2008). Talking about walking: Biomechanics and the language of locomotion. Psychological Science, 19(3), 232-240. doi:10.1111/j.1467-9280.2008.02074.x.

    Abstract

    What drives humans around the world to converge in certain ways in their naming while diverging dramatically in others? We studied how naming patterns are constrained by investigating whether labeling of human locomotion reflects the biomechanical discontinuity between walking and running gaits. Similarity judgments of a student locomoting on a treadmill at different slopes and speeds revealed perception of this discontinuity. Naming judgments of the same clips by speakers of English, Japanese, Spanish, and Dutch showed lexical distinctions between walking and running consistent with the perceived discontinuity. Typicality judgments showed that major gait terms of the four languages share goodness-of-example gradients. These data demonstrate that naming reflects the biomechanical discontinuity between walking and running and that shared elements of naming can arise from correlations among stimulus properties that are dynamic and fleeting. The results support the proposal that converging naming patterns reflect structure in the world, not only acts of construction by observers.
  • Malt, B. C., Ameel, E., Gennari, S., Imai, M., Saji, N., & Majid, A. (2011). Do words reveal concepts? In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 519-524). Austin, TX: Cognitive Science Society.

    Abstract

    To study concepts, cognitive scientists must first identify some. The prevailing assumption is that they are revealed by words such as triangle, table, and robin. But languages vary dramatically in how they carve up the world by name. Either ordinary concepts must be heavily language-dependent or names cannot be a direct route to concepts. We asked English, Dutch, Spanish, and Japanese speakers to name videos of human locomotion and judge their similarities. We investigated what name inventories and scaling solutions on name similarity and on physical similarity for the groups individually and together suggest about the underlying concepts. Aggregated naming and similarity solutions converged on results distinct from the answers suggested by the word inventories and scaling solutions of any single language. Words such as triangle, table, and robin can help identify the conceptual space of a domain, but they do not directly reveal units of knowledge usefully considered 'concepts'.
  • Marcus, G., & Fisher, S. E. (2011). Genes and language. In P. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 341-344). New York: Cambridge University Press.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (2011). Landscape in language: An introduction. In D. M. Mark, A. G. Turk, N. Burenhult, & D. Stea (Eds.), Landscape in language: Transdisciplinary perspectives (pp. 1-24). Amsterdam: John Benjamins.
  • Mark, D. M., Turk, A., Burenhult, N., & Stea, D. (Eds.). (2011). Landscape in language: Transdisciplinary perspectives. Amsterdam: John Benjamins.

    Abstract

    Landscape is fundamental to human experience. Yet until recently, the study of landscape has been fragmented among the disciplines. This volume focuses on how landscape is represented in language and thought, and what this reveals about the relationships of people to place and to land. Scientists of various disciplines such as anthropologists, geographers, information scientists, linguists, and philosophers address several questions, including: Are there cross-cultural and cross-linguistic variations in the delimitation, classification, and naming of geographic features? Can alternative world-views and conceptualizations of landscape be used to produce culturally-appropriate Geographic Information Systems (GIS)? Topics included ontology of landscape; landscape terms and concepts; toponyms; spiritual aspects of land and landscape terms; research methods; ethical dimensions of the research; and its potential value to indigenous communities involved in this type of research.
  • de Marneffe, M.-C., Tomlinson, J. J., Tice, M., & Sumner, M. (2011). The interaction of lexical frequency and phonetic variation in the perception of accented speech. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3575-3580). Austin, TX: Cognitive Science Society.

    Abstract

    How listeners understand spoken words despite massive variation in the speech signal is a central issue for linguistic theory. A recent focus on lexical frequency and specificity has proved fruitful in accounting for this phenomenon. Speech perception, though, is a multi-faceted process and likely incorporates a number of mechanisms to map a variable signal to meaning. We examine a well-established language use factor — lexical frequency — and how this factor is integrated with phonetic variability during the perception of accented speech. We show that an integrated perspective highlights a low-level perceptual mechanism that accounts for the perception of accented speech absent native contrasts, while shedding light on the use of interactive language factors in the perception of spoken words.
  • Martin, A. E., & McElree, B. (2008). A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis. Journal of Memory and Language, 58(3), 879-906. doi:10.1016/j.jml.2007.06.010.

    Abstract

    Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context. We investigated the online interpretation of VP ellipsis in an eye-tracking experiment and four speed–accuracy tradeoff experiments. To investigate whether the antecedent for a VP ellipsis is accessed with a search or direct-access retrieval process, Experiments 1 and 2 measured the effect of the distance between an ellipsis and its antecedent on the speed and accuracy of comprehension. Accuracy was lower with longer distances, indicating that interpolated material reduced the quality of retrieved information about the antecedent. However, contra a search process, distance did not affect the speed of interpreting ellipsis. This pattern suggests that antecedent representations are content-addressable and retrieved with a direct-access process. To determine whether interpreting ellipsis involves copying antecedent information into the ellipsis site, Experiments 3–5 manipulated the length and complexity of the antecedent. Some types of antecedent complexity lowered accuracy, notably, the number of discourse entities in the antecedent. However, neither antecedent length nor complexity affected the speed of interpreting the ellipsis. This pattern is inconsistent with a copy operation, and it suggests that ellipsis interpretation may involve a pointer to extant structures in memory.
  • Martin, A. E., & McElree, B. (2011). Direct-access retrieval during sentence comprehension: Evidence from Sluicing. Journal of Memory and Language, 64(4), 327-343. doi:10.1016/j.jml.2010.12.006.

    Abstract

    Language comprehension requires recovering meaning from linguistic form, even when the mapping between the two is indirect. A canonical example is ellipsis, the omission of information that is subsequently understood without being overtly pronounced. Comprehension of ellipsis requires retrieval of an antecedent from memory, without prior prediction, a property which enables the study of retrieval in situ (Martin and McElree, 2008 and Martin and McElree, 2009). Sluicing, or inflectional-phrase ellipsis, in the presence of a conjunction, presents a test case where a competing antecedent position is syntactically licensed, in contrast with most cases of nonadjacent dependency, including verb–phrase ellipsis. We present speed–accuracy tradeoff and eye-movement data inconsistent with the hypothesis that retrieval is accomplished via a syntactically guided search, a particular variant of search not examined in past research. The observed timecourse profiles are consistent with the hypothesis that antecedents are retrieved via a cue-dependent direct-access mechanism susceptible to general memory variables.
  • Matthews, L. J., Tehrani, J. J., Jordan, F., Collard, M., & Nunn, C. (2011). Testing for divergent transmission histories among cultural characters: A study using Bayesian phylogenetic methods and Iranian tribal textile data. Plos One, 6(4), e14810. doi:10.1371/journal.pone.0014810.

    Abstract

    Background: Archaeologists and anthropologists have long recognized that different cultural complexes may have distinct descent histories, but they have lacked analytical techniques capable of easily identifying such incongruence. Here, we show how Bayesian phylogenetic analysis can be used to identify incongruent cultural histories. We employ the approach to investigate Iranian tribal textile traditions. Methods: We used Bayes factor comparisons in a phylogenetic framework to test two models of cultural evolution: the hierarchically integrated system hypothesis and the multiple coherent units hypothesis. In the hierarchically integrated system hypothesis, a core tradition of characters evolves through descent with modification and characters peripheral to the core are exchanged among contemporaneous populations. In the multiple coherent units hypothesis, a core tradition does not exist. Rather, there are several cultural units consisting of sets of characters that have different histories of descent. Results: For the Iranian textiles, the Bayesian phylogenetic analyses supported the multiple coherent units hypothesis over the hierarchically integrated system hypothesis. Our analyses suggest that pile-weave designs represent a distinct cultural unit that has a different phylogenetic history compared to other textile characters. Conclusions: The results from the Iranian textiles are consistent with the available ethnographic evidence, which suggests that the commercial rug market has influenced pile-rug designs but not the techniques or designs incorporated in the other textiles produced by the tribes. We anticipate that Bayesian phylogenetic tests for inferring cultural units will be of great value for researchers interested in studying the evolution of cultural traits including language, behavior, and material culture.
  • McCafferty, S. G., & Gullberg, M. (Eds.). (2008). Gesture and SLA: Toward an integrated approach [Special Issue]. Studies in Second Language Acquisition, 30(2).
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McGettigan, C., Warren, J. E., Eisner, F., Marshall, C. R., Shanmugalingam, P., & Scott, S. K. (2011). Neural correlates of sublexical processing in phonological working memory. Journal of Cognitive Neuroscience, 23, 961-977. doi:10.1162/jocn.2010.21491.

    Abstract

    This study investigated links between working memory and speech processing systems. We used delayed pseudoword repetition in fMRI to investigate the neural correlates of sublexical structure in phonological working memory (pWM). We orthogonally varied the number of syllables and consonant clusters in auditory pseudowords and measured the neural responses to these manipulations under conditions of covert rehearsal (Experiment 1). A left-dominant network of temporal and motor cortex showed increased activity for longer items, with motor cortex only showing greater activity concomitant with adding consonant clusters. An individual-differences analysis revealed a significant positive relationship between activity in the angular gyrus and the hippocampus, and accuracy on pseudoword repetition. As models of pWM stipulate that its neural correlates should be activated during both perception and production/rehearsal [Buchsbaum, B. R., & D'Esposito, M. The search for the phonological store: From loop to convolution. Journal of Cognitive Neuroscience, 20, 762-778, 2008; Jacquemot, C., & Scott, S. K. What is the relationship between phonological short-term memory and speech processing? Trends in Cognitive Sciences, 10, 480-486, 2006; Baddeley, A. D., & Hitch, G. Working memory. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 8, pp. 47-89). New York: Academic Press, 1974], we further assessed the effects of the two factors in a separate passive listening experiment (Experiment 2). In this experiment, the effect of the number of syllables was concentrated in posterior-medial regions of the supratemporal plane bilaterally, although there was no evidence of a significant response to added clusters. Taken together, the results identify the planum temporale as a key region in pWM; within this region, representations are likely to take the form of auditory or audiomotor "templates" or "chunks" at the level of the syllable [Papoutsi, M., de Zwart, J. A., Jansma, J. M., Pickering, M. J., Bednar, J. A., & Horwitz, B. From phonemes to articulatory codes: an fMRI study of the role of Broca's area in speech production. Cerebral Cortex, 19, 2156-2165, 2009; Warren, J. E., Wise, R. J. S., & Warren, J. D. Sounds do-able: auditory-motor transformations and the posterior temporal plane. Trends in Neurosciences, 28, 636-643, 2005; Griffiths, T. D., & Warren, J. D. The planum temporale as a computational hub. Trends in Neurosciences, 25, 348-353, 2002], whereas more lateral structures on the STG may deal with phonetic analysis of the auditory input [Hickok, G. The functional neuroanatomy of language. Physics of Life Reviews, 6, 121-143, 2009].
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer word. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Menenti, L., Gierhan, S., Segaert, K., & Hagoort, P. (2011). Shared language: Overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychological Science, 22, 1173-1182. doi:10.1177/0956797611418347.

    Abstract

    Whether the brain’s speech-production system is also involved in speech comprehension is a topic of much debate. Research has focused on whether motor areas are involved in listening, but overlap between speaking and listening might occur not only at primary sensory and motor levels, but also at linguistic levels (where semantic, lexical, and syntactic processes occur). Using functional MRI adaptation during speech comprehension and production, we found that the brain areas involved in semantic, lexical, and syntactic processing are mostly the same for speaking and for listening. Effects of primary processing load (indicative of sensory and motor processes) overlapped in auditory cortex and left inferior frontal cortex, but not in motor cortex, where processing load affected activity only in speaking. These results indicate that the linguistic parts of the language system are used for both speaking and listening, but that the motor system does not seem to provide a crucial contribution to listening.
  • Mester, J. L., Tilot, A. K., Rybicki, L. A., Frazier, T. W., & Eng, C. (2011). Analysis of prevalence and degree of macrocephaly in patients with germline PTEN mutations and of brain weight in Pten knock-in murine model. European Journal of Human Genetics, 19(7), 763-768. doi:10.1038/ejhg.2011.20.

    Abstract

    PTEN Hamartoma Tumour Syndrome (PHTS) includes Cowden syndrome (CS), Bannayan-Riley-Ruvalcaba syndrome (BRRS), and other conditions resulting from germline mutation of the PTEN tumour suppressor gene. Although macrocephaly, presumably due to megencephaly, is found in both CS and BRRS, the prevalence and degree have not been formally assessed in PHTS. We evaluated head size in a prospective nested series of 181 patients found to have pathogenic germline PTEN mutations. Clinical data including occipital-frontal circumference (OFC) measurement were requested for all participants. Macrocephaly was present in 94% of 161 evaluable PHTS individuals. In patients ≤18 years, mean OFC was +4.89 standard deviations (SD) above the population mean with no difference between genders (P=0.7). Among patients >18 years, average OFC was 60.0 cm in females and 62.8 cm in males (P<0.0001). To systematically determine whether macrocephaly was due to megencephaly, we examined PtenM3M4 missense mutant mice generated and maintained on mixed backgrounds. Mice were killed at various ages, and brains were dissected out and weighed. Average brain weight for PtenM3M4 homozygous mice (N=15) was 1.02 g compared with 0.57 g for heterozygous mice (N=29) and 0.49 g for wild-type littermates (N=24) (P<0.0001). Macrocephaly, secondary to megencephaly, is an important component of PHTS and more prevalent than previously appreciated. Patients with PHTS have increased risks for breast and thyroid cancers, and early diagnosis is key to initiating timely screening to reduce patient morbidity and mortality. Clinicians should consider germline PTEN testing at an early point in the diagnostic work-up for patients with extreme macrocephaly.
  • Meyer, A. S., Ouellet, M., & Häcker, C. (2008). Parallel processing of objects in a naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 982-987. doi:10.1037/0278-7393.34.4.982.

    Abstract

    The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high or low frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Minagawa-Kawai, Y., Cristia, A., Vendelin, I., Cabrol, D., & Dupoux, E. (2011). Assessing signal-driven mechanisms in neonates: Brain responses to temporally and spectrally different sounds. Frontiers in Psychology, 2, 135. doi:10.3389/fpsyg.2011.00135.

    Abstract

    Past studies have found that, in adults, the acoustic properties of sound signals (such as fast versus slow temporal features) differentially activate the left and right hemispheres, and some have hypothesized that left-lateralization for speech processing may follow from left-lateralization to rapidly changing signals. Here, we tested whether newborns’ brains show some evidence of signal-specific lateralization responses using near-infrared spectroscopy (NIRS) and auditory stimuli that elicit lateralized responses in adults, composed of segments that vary in duration and spectral diversity. We found significantly greater bilateral responses of oxygenated hemoglobin (oxy-Hb) in the temporal areas for stimuli with a minimum segment duration of 21 ms than for stimuli with a minimum segment duration of 667 ms. However, we found no evidence for hemispheric asymmetries dependent on the stimulus characteristics. We hypothesize that acoustic-based functional brain asymmetries may develop throughout early infancy, and discuss their possible relationship with brain asymmetries for language.
  • Minagawa-Kawai, Y., Cristia, A., & Dupoux, E. (2011). Cerebral lateralization and early speech acquisition: A developmental scenario. Developmental Cognitive Neuroscience, 1, 217-232. doi:10.1016/j.dcn.2011.03.005.

    Abstract

    During the past ten years, research using Near-infrared Spectroscopy (NIRS) to study the developing brain has provided groundbreaking evidence of brain functions in infants. This paper presents a theoretically oriented review of this wealth of evidence, summarizing recent NIRS data on language processing, without neglecting other neuroimaging or behavioral studies in infancy and adulthood. We review three competing classes of hypotheses (i.e. signal-driven, domain-driven, and learning biases hypotheses) regarding the causes of hemispheric specialization for speech processing. We assess the fit between each of these hypotheses and neuroimaging evidence in speech perception and show that none of the three hypotheses can account for the entire set of observations on its own. However, we argue that they provide a good fit when combined within a developmental perspective. According to our proposed scenario, lateralization for language emerges out of the interaction between pre-existing left–right biases in generic auditory processing (signal-driven hypothesis), and a left-hemisphere predominance of particular learning mechanisms (learning-biases hypothesis). As a result of this completed developmental process, the native language is represented in the left hemisphere predominantly. The integrated scenario makes it possible to link infant and adult data, and points to many empirical avenues that need to be explored more systematically.
  • Mitterer, H., & De Ruiter, J. P. (2008). Recalibrating color categories using world knowledge. Psychological Science, 19(7), 629-634. doi:10.1111/j.1467-9280.2008.02133.x.

    Abstract

    When the perceptual system uses color to facilitate object recognition, it must solve the color-constancy problem: The light an object reflects to an observer's eyes confounds properties of the source of the illumination with the surface reflectance of the object. Information from the visual scene (bottom-up information) is insufficient to solve this problem. We show that observers use world knowledge about objects and their prototypical colors as a source of top-down information to improve color constancy. Specifically, observers use world knowledge to recalibrate their color categories. Our results also suggest that similar effects previously observed in language perception are the consequence of a general perceptual process.
  • Mitterer, H., & Ernestus, M. (2008). The link between speech perception and production is phonological and abstract: Evidence from the shadowing task. Cognition, 109(1), 168-173. doi:10.1016/j.cognition.2008.08.002.

    Abstract

    This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.
  • Mitterer, H., Yoneyama, K., & Ernestus, M. (2008). How we hear what is hardly there: Mechanisms underlying compensation for /t/-reduction in speech comprehension. Journal of Memory and Language, 59, 133-152. doi:10.1016/j.jml.2008.02.004.

    Abstract

    In four experiments, we investigated how listeners compensate for reduced /t/ in Dutch. Mitterer and Ernestus [Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers lenite: evidence from /t/-lenition in Dutch. Journal of Phonetics, 34, 73–103] showed that listeners are biased to perceive a /t/ more easily after /s/ than after /n/, compensating for the tendency of speakers to reduce word-final /t/ after /s/ in spontaneous conversations. We tested the robustness of this phonological context effect in perception with three very different experimental tasks: an identification task, a discrimination task with native listeners and with non-native listeners who do not have any experience with /t/-reduction, and a passive listening task (using electrophysiological dependent measures). The context effect was generally robust against these experimental manipulations, although we also observed some deviations from the overall pattern. Our combined results show that the context effect in compensation for reduced /t/ results from a complex process involving auditory constraints, phonological learning, and lexical constraints.
  • Mitterer, H. (2008). How are words reduced in spontaneous speech? In A. Botonis (Ed.), Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics (pp. 165-168). Athens: University of Athens.

    Abstract

    Words are reduced in spontaneous speech. If reductions are constrained by functional (i.e., perception and production) constraints, they should not be arbitrary. This hypothesis was tested by examining the pronunciations of high- to mid-frequency words in a Dutch and a German spontaneous speech corpus. In logistic-regression models the "reduction likelihood" of a phoneme was predicted by fixed-effect predictors such as position within the word, word length, word frequency, and stress, as well as random effects such as phoneme identity and word. The models for Dutch and German show many commonalities. This is in line with the assumption that similar functional constraints influence reductions in both languages.
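
    As an informal illustration of the kind of analysis described in this abstract, the following Python sketch fits a fixed-effects-only logistic regression predicting whether a phoneme is reduced. The data are simulated and the predictor names merely mirror those named above; the paper's random effects (phoneme identity, word) are omitted to keep the sketch minimal.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated corpus data: one row per phoneme token (illustrative only).
    rng = np.random.default_rng(1)
    n = 500
    data = pd.DataFrame({
        "position": rng.integers(1, 6, n),      # position of the phoneme within the word
        "word_len": rng.integers(3, 9, n),      # word length in phonemes
        "log_freq": rng.normal(3.0, 1.0, n),    # log word frequency
        "stressed": rng.integers(0, 2, n),      # 1 = phoneme in a stressed syllable
    })

    # Generate a binary "reduced" outcome that depends on the predictors.
    eta = (-1.0 + 0.4 * data["position"] - 0.2 * data["word_len"]
           + 0.3 * data["log_freq"] - 0.8 * data["stressed"])
    data["reduced"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

    # Fixed-effects logistic regression over the predictors named in the abstract.
    result = smf.logit("reduced ~ position + word_len + log_freq + stressed",
                       data=data).fit(disp=False)
    print(result.params)
    ```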
  • Mitterer, H., Chen, Y., & Zhou, X. (2011). Phonological abstraction in processing lexical-tone variation: Evidence from a learning paradigm. Cognitive Science, 35, 184-197. doi:10.1111/j.1551-6709.2010.01140.x.

    Abstract

    There is a growing consensus that the mental lexicon contains both abstract and word-specific acoustic information. To investigate their relative importance for word recognition, we tested to what extent perceptual learning is word specific or generalizable to other words. In an exposure phase, participants were divided into two groups; each group was semantically biased to interpret an ambiguous Mandarin tone contour as either tone1 or tone2. In a subsequent test phase, the perception of ambiguous contours was dependent on the exposure phase: Participants who heard ambiguous contours as tone1 during exposure were more likely to perceive ambiguous contours as tone1 than participants who heard ambiguous contours as tone2 during exposure. This learning effect was only slightly larger for previously encountered than for not previously encountered words. The results speak for an architecture with prelexical analysis of phonological categories to achieve both lexical access and episodic storage of exemplars.
  • Mitterer, H. (2011). Recognizing reduced forms: Different processing mechanisms for similar reductions. Journal of Phonetics, 39, 298-303. doi:10.1016/j.wocn.2010.11.009.

    Abstract

    Recognizing phonetically reduced forms is a huge challenge for spoken-word recognition. Phonetic reductions not only occur often, but also come in a variety of forms. The paper investigates how two similar forms of reductions – /t/-reduction and nasal place assimilation in Dutch – can eventually be recognized, focusing on the role of following phonological context. Previous research indicated that listeners take the following phonological context into account when compensating for /t/-reduction and nasal place assimilation. The current paper shows that these context effects arise in early perceptual processes for the perception of assimilated forms, but at a later stage of processing for the perception of /t/-reduced forms. This shows first that the recognition of apparently similarly reduced words may rely on different processing mechanisms and, second, that searching for dissociations over tasks is a promising research strategy to investigate how reduced forms are recognized.
  • Mitterer, H. (2011). Social accountability influences phonetic alignment. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2442.

    Abstract

    Speakers tend to take over the articulatory habits of their interlocutors [e.g., Pardo, JASA (2006)]. This phonetic alignment could be the consequence of either a social mechanism or a direct and automatic link between speech perception and production. The latter assumes that social variables should have little influence on phonetic alignment. To test this, participants were engaged in a "cloze task" (i.e., Stimulus: "In fantasy movies, silver bullets are used to kill ..." Response: "werewolves") with either one or four interlocutors. Given findings with the Asch-conformity paradigm in social psychology, multiple consistent speakers should exert a stronger force on the participant to align. To control the speech style of the interlocutors, their questions and answers were pre-recorded in either a formal or a casual speech style. The stimuli's speech style was then manipulated between participants and was consistent throughout the experiment for a given participant. Surprisingly, participants aligned less with the speech style if there were multiple interlocutors. This may reflect a "diffusion of responsibility": Participants may find it more important to align when they interact with only one person than with a larger group.
  • Mitterer, H. (2011). The mental lexicon is fully specified: Evidence from eye-tracking. Journal of Experimental Psychology: Human Perception and Performance, 37(2), 496-513. doi:10.1037/a0020989.

    Abstract

    Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input ("pin") activates lexical entries with underspecified coronal stops ('tin'), but lexical entries with specified labial stops ('pin') are not activated by mismatching input ("tin"). The eye-tracking data failed to show such a pattern. Although words that were phonologically similar to the spoken target attracted more looks than unrelated distractors, this effect was symmetric in Experiment 1 with minimal pairs ("tin"- "pin") and in Experiments 2 and 3 with words with an onset overlap ("peacock" - "teacake"). Experiment 4 revealed that /t/-initial words were looked at more frequently if the spoken input mismatched only in terms of place than if it mismatched in place and voice, contrary to the assumption that /t/ is unspecified for place and voice. These results show that speech perception uses signal-driven information to the fullest, as predicted by an optimal perception account.
  • Morgan, J. L., Van Elswijk, G., & Meyer, A. S. (2008). Extrafoveal processing of objects in a naming task: Evidence from word probe experiments. Psychonomic Bulletin & Review, 15, 561-565. doi:10.3758/PBR.15.3.561.

    Abstract

    In two experiments, we investigated the processing of extrafoveal objects in a double-object naming task. On most trials, participants named two objects; but on some trials, the objects were replaced shortly after trial onset by a written word probe, which participants had to name instead of the objects. In Experiment 1, the word was presented in the same location as the left object either 150 or 350 msec after trial onset and was either phonologically related or unrelated to that object name. Phonological facilitation was observed at the later but not at the earlier SOA. In Experiment 2, the word was either phonologically related or unrelated to the right object and was presented 150 msec after the speaker had begun to inspect that object. In contrast with Experiment 1, phonological facilitation was found at this early SOA, demonstrating that the speakers had begun to process the right object prior to fixation.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2008). Speech planning during multiple-object naming: Effects of ageing. Quarterly Journal of Experimental Psychology, 61, 1217-1238. doi:10.1080/17470210701467912.

    Abstract

    Two experiments were conducted with younger and older speakers. In Experiment 1, participants named single objects that were intact or visually degraded, while hearing distractor words that were phonologically related or unrelated to the object name. In both younger and older participants naming latencies were shorter for intact than for degraded objects and shorter when related than when unrelated distractors were presented. In Experiment 2, the single objects were replaced by object triplets, with the distractors being phonologically related to the first object's name. Naming latencies and gaze durations for the first object showed degradation and relatedness effects that were similar to those in single-object naming. Older participants were slower than younger participants when naming single objects and slower and less fluent on the second but not the first object when naming object triplets. The results of these experiments indicate that both younger and older speakers plan object names sequentially, but that older speakers use this planning strategy less efficiently.
  • Mulder, K., & Hulstijn, J. H. (2011). Linguistic skills of adult native speakers, as a function of age and level of education. Applied Linguistics, 32, 475-494. doi:10.1093/applin/amr016.

    Abstract

    This study assessed, in a sample of 98 adult native speakers of Dutch, how their lexical skills and their speaking proficiency varied as a function of their age and level of education and profession (EP). Participants, categorized in terms of their age (18–35, 36–50, and 51–76 years old) and the level of their EP (low versus high), were tested on their lexical knowledge, lexical fluency, and lexical memory, and they performed four speaking tasks, differing in genre and formality. Speaking performance was rated in terms of communicative adequacy and in terms of number of words, number of T-units, words per T-unit, content words per T-unit, hesitations per T-unit, and grammatical errors per T-unit. Increasing age affected lexical knowledge positively but lexical fluency and memory negatively. High EP positively affected lexical knowledge and memory but EP did not affect lexical fluency. Communicative adequacy of the responses in the speaking tasks was positively affected by high EP but was not affected by age. It is concluded that, given the large variability in native speakers’ language knowledge and skills, studies investigating the question of whether second-language learners can reach native levels of proficiency should take native-speaker variability into account.

    Additional information

    Mulder_2011_Supplementary Data.doc
  • Munafò, M. R., Freathy, R. M., Ring, S. M., St Pourcain, B., & Smith, G. D. (2011). Association of COMT Val108/158Met Genotype and Cigarette Smoking in Pregnant Women. Nicotine & Tobacco Research, 13(2), 55-63. doi:10.1093/ntr/ntq209.

    Abstract

    INTRODUCTION: Smoking behaviors, including heaviness of smoking and smoking cessation, are known to be under a degree of genetic influence. The enzyme catechol O-methyltransferase (COMT) is of relevance in studies of smoking behavior and smoking cessation due to its presence in dopaminergic brain regions. While the COMT gene is therefore one of the more promising candidate genes for smoking behavior, some inconsistencies have begun to emerge. METHODS: We explored whether the rs4680 A (Met) allele of the COMT gene predicts increased heaviness of smoking and reduced likelihood of smoking cessation in a large population-based cohort of pregnant women. We further conducted a meta-analysis of published data from community samples investigating the association of this polymorphism with heaviness of smoking and smoking status. RESULTS: In our primary sample, the A (Met) allele was associated with increased heaviness of smoking before pregnancy but not with the odds of continuing to smoke in pregnancy either in the first trimester or in the third trimester. Meta-analysis also indicated modest evidence of association of the A (Met) allele with increased heaviness of smoking but not with persistent smoking. CONCLUSIONS: Our data suggest a weak association between COMT genotype and heaviness of smoking, which is supported by our meta-analysis. However, it should be noted that the strength of evidence for this association was modest. Neither our primary data nor our meta-analysis support an association between COMT genotype and smoking cessation. Therefore, COMT remains a plausible candidate gene for smoking behavior phenotypes, in particular, heaviness of smoking.
  • Narasimhan, B., & Dimroth, C. (2008). Word order and information status in child language. Cognition, 107, 317-329. doi:10.1016/j.cognition.2007.07.010.

    Abstract

    In expressing rich, multi-dimensional thought in language, speakers are influenced by a range of factors that influence the ordering of utterance constituents. A fundamental principle that guides constituent ordering in adults has to do with information status, the accessibility of referents in discourse. Typically, adults order previously mentioned referents (“old” or accessible information) first, before they introduce referents that have not yet been mentioned in the discourse (“new” or inaccessible information) at both sentential and phrasal levels. Here we ask whether a similar principle influences ordering patterns at the phrasal level in children who are in the early stages of combining words productively. Prior research shows that when conveying semantic relations, children reproduce language-specific ordering patterns in the input, suggesting that they do not have a bias for any particular order to describe “who did what to whom”. But our findings show that when they label “old” versus “new” referents, 3- to 5-year-old children prefer an ordering pattern opposite to that of adults (Study 1). Children’s ordering preference is not derived from input patterns, as “old-before-new” is also the preferred order in caregivers’ speech directed to young children (Study 2). Our findings demonstrate that a key principle governing ordering preferences in adults does not originate in early childhood, but develops: from new-to-old to old-to-new.
  • Narasimhan, B., & Gullberg, M. (2011). The role of input frequency and semantic transparency in the acquisition of verb meaning: Evidence from placement verbs in Tamil and Dutch. Journal of Child Language, 38, 504-532. doi:10.1017/S0305000910000164.

    Abstract

    We investigate how Tamil- and Dutch-speaking adults and 4- to 5-year-old children use caused posture verbs (‘lay/stand a bottle on a table’) to label placement events in which objects are oriented vertically or horizontally. Tamil caused posture verbs consist of morphemes that individually label the causal and result subevents (nikka veyyii ‘make stand’; paDka veyyii ‘make lie’), occurring in situational and discourse contexts where object orientation is at issue. Dutch caused posture verbs are less semantically transparent: they are monomorphemic (zetten ‘set/stand’; leggen ‘lay’), often occurring in contexts where factors other than object orientation determine use. Caused posture verbs occur rarely in corpora of Tamil input, whereas in Dutch input, they are used frequently. Elicited production data reveal that Tamil four-year-olds use infrequent placement verbs appropriately whereas Dutch children use high-frequency placement verbs inappropriately even at age five. Semantic transparency exerts a stronger influence than input frequency in constraining children’s verb meaning acquisition.
  • Need, A. C., Attix, D. K., McEvoy, J. M., Cirulli, E. T., Linney, K. N., Wagoner, A. P., Gumbs, C. E., Giegling, I., Möller, H.-J., Francks, C., Muglia, P., Roses, A., Gibson, G., Weale, M. E., Rujescu, D., & Goldstein, D. B. (2008). Failure to replicate effect of Kibra on human memory in two large cohorts of European origin. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147B, 667-668. doi:10.1002/ajmg.b.30658.

    Abstract

    It was recently suggested that the Kibra polymorphism rs17070145 has a strong effect on multiple episodic memory tasks in humans. We attempted to replicate this using two cohorts of European genetic origin (n = 319 and n = 365). We found no association with either the original SNP or a set of tagging SNPs in the Kibra gene with multiple verbal memory tasks, including one that was an exact replication (Auditory Verbal Learning Task, AVLT). These results suggest that Kibra does not have a strong and general effect on human memory.

    Additional information

    SupplementaryMethodsIAmJMedGen.doc
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The neurocognition of referential ambiguity in language comprehension. Language and Linguistics Compass, 2(4), 603-630. doi:10.1111/j.1749-818x.2008.00070.x.

    Abstract

    Referential ambiguity arises whenever readers or listeners are unable to select a unique referent for a linguistic expression out of multiple candidates. In the current article, we review a series of neurocognitive experiments from our laboratory that examine the neural correlates of referential ambiguity, and that employ the brain signature of referential ambiguity to derive functional properties of the language comprehension system. The results of our experiments converge to show that referential ambiguity resolution involves making an inference to evaluate the referential candidates. These inferences only take place when both referential candidates are, at least initially, equally plausible antecedents. Whether comprehenders make these anaphoric inferences is strongly context dependent and co-determined by characteristics of the reader. In addition, readers appear to disregard referential ambiguity when the competing candidates are each semantically incoherent, suggesting that, under certain circumstances, semantic analysis can proceed even when referential analysis has not yielded a unique antecedent. Finally, results from a functional neuroimaging study suggest that whereas the neural systems that deal with referential ambiguity partially overlap with those that deal with referential failure, they show an inverse coupling with the neural systems associated with semantic processing, possibly reflecting the relative contributions of semantic and episodic processing to re-establish semantic and referential coherence, respectively.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2008). The interplay between semantic and referential aspects of anaphoric noun phrase resolution: Evidence from ERPs. Brain & Language, 106, 119-131. doi:10.1016/j.bandl.2008.05.001.

    Abstract

    In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.
  • Nieuwland, M. S., & Kuperberg, G. R. (2008). When the truth is not too hard to handle: An event-related potential study on the pragmatics of negation. Psychological Science, 19(12), 1213-1218. doi:10.1111/j.1467-9280.2008.02226.x.

    Abstract

    Our brains rapidly map incoming language onto what we hold to be true. Yet there are claims that such integration and verification processes are delayed in sentences containing negation words like not. However, studies have often confounded whether a statement is true and whether it is a natural thing to say during normal communication. In an event-related potential (ERP) experiment, we aimed to disentangle effects of truth value and pragmatic licensing on the comprehension of affirmative and negated real-world statements. As in affirmative sentences, false words elicited a larger N400 ERP than did true words in pragmatically licensed negated sentences (e.g., “In moderation, drinking red wine isn't bad/good…”), whereas true and false words elicited similar responses in unlicensed negated sentences (e.g., “A baby bunny's fur isn't very hard/soft…”). These results suggest that negation poses no principled obstacle for readers to immediately relate incoming words to what they hold to be true.
  • Nobe, S., Furuyama, N., Someya, Y., Sekine, K., Suzuki, M., & Hayashi, K. (2008). A longitudinal study on gesture of simultaneous interpreter. The Japanese Journal of Speech Sciences, 8, 63-83.
  • Noble, C. H., Rowland, C. F., & Pine, J. M. (2011). Comprehension of argument structure and semantic roles: Evidence from English-learning children and the forced-choice pointing paradigm. Cognitive Science, 35(5), 963-982. doi:10.1111/j.1551-6709.2011.01175.x.

    Abstract

    Research using the intermodal preferential looking paradigm (IPLP) has consistently shown that English-learning children aged 2 can associate transitive argument structure with causal events. However, studies using the same methodology investigating 2-year-old children’s knowledge of the conjoined agent intransitive and semantic role assignment have reported inconsistent findings. The aim of the present study was to establish at what age English-learning children have verb-general knowledge of both transitive and intransitive argument structure using a new method: the forced-choice pointing paradigm. The results suggest that young 2-year-olds can associate transitive structures with causal (or externally caused) events and can use transitive structure to assign agent and patient roles correctly. However, the children were unable to associate the conjoined agent intransitive with noncausal events until aged 3;4. The results confirm the pattern from previous IPLP studies and indicate that children may develop the ability to comprehend different aspects of argument structure at different ages. The implications for theories of language acquisition and the nature of the language acquisition mechanism are discussed.
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is, they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Noordman, L. G. M., & Vonk, W. (1998). Memory-based processing in understanding causal information. Discourse Processes, 191-212. doi:10.1080/01638539809545044.

    Abstract

    The reading process depends both on the text and on the reader. When we read a text, propositions in the current input are matched to propositions in the memory representation of the previous discourse but also to knowledge structures in long‐term memory. Therefore, memory‐based text processing refers both to the bottom‐up processing of the text and to the top‐down activation of the reader's knowledge. In this article, we focus on the role of cognitive structures in the reader's knowledge. We argue that causality is an important category in structuring human knowledge and that this property has consequences for text processing. Some research is discussed that illustrates that the more the information in the text reflects causal categories, the more easily the information is processed.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2011). The grammar of perception. In K. Kendrick, & A. Majid (Eds.), Field manual volume 14 (pp. 1-10). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Nordhoff, S., & Hammarström, H. (2011). Glottolog/Langdoc: Defining dialects, languages, and language families as collections of resources. Proceedings of the First International Workshop on Linked Science 2011 (LISC2011), Bonn, Germany, October 24, 2011.

    Abstract

    This paper describes the Glottolog/Langdoc project, an attempt to provide near-total bibliographical coverage of descriptive resources for the world's languages. Every reference is treated as a resource, as is every "languoid" [1]. References are linked to the languoids which they describe, and languoids are linked to the references that describe them. Family relations between languoids are modeled in SKOS, as are relations across different classifications of the same languages. This setup allows the representation of languoids as collections of references, rendering the question of the definition of entities like 'Scots', 'West-Germanic' or 'Indo-European' more empirical.
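
    As a rough illustration of the SKOS modelling mentioned above, the sketch below builds a tiny graph with rdflib. The namespaces, identifiers, and the use of skos:broader for family relations are assumptions made for this example and do not reflect Glottolog's actual data model.

    ```python
    from rdflib import Graph, URIRef, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # Hypothetical namespaces and identifiers, invented for this example.
    LANG = Namespace("http://example.org/languoid/")
    REF = Namespace("http://example.org/reference/")

    g = Graph()
    g.bind("skos", SKOS)

    # Languoids as SKOS concepts, with family relations expressed via skos:broader.
    for code, label in [("scots", "Scots"), ("westgermanic", "West Germanic"),
                        ("indoeuropean", "Indo-European")]:
        g.add((LANG[code], RDF.type, SKOS.Concept))
        g.add((LANG[code], SKOS.prefLabel, Literal(label, lang="en")))
    g.add((LANG["scots"], SKOS.broader, LANG["westgermanic"]))
    g.add((LANG["westgermanic"], SKOS.broader, LANG["indoeuropean"]))

    # A descriptive resource linked to the languoid it describes (the property
    # name is a placeholder chosen for brevity).
    DESCRIBES = URIRef("http://example.org/vocab/describes")
    g.add((REF["ref1914"], DESCRIBES, LANG["scots"]))

    print(g.serialize(format="turtle"))
    ```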
  • Norris, D., & McQueen, J. M. (2008). Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review, 115(2), 357-395. doi:10.1037/0033-295X.115.2.357.

    Abstract

    A Bayesian model of continuous speech recognition is presented. It is based on Shortlist ( D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
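
    The Bayesian principle at the heart of this model can be illustrated with a toy computation; this is not the Shortlist B implementation, and the mini-lexicon, frequencies, and phoneme probabilities below are invented. Each candidate word is scored by combining a frequency-based prior with the per-segment phoneme probabilities that match its transcription.

    ```python
    import numpy as np

    # Per-segment phoneme probabilities for a three-segment input (toy values).
    evidence = [
        {"k": 0.7, "g": 0.3},
        {"ae": 0.9, "e": 0.1},
        {"t": 0.6, "d": 0.4},
    ]

    # Tiny lexicon: transcription plus a relative frequency used as the prior P(word).
    lexicon = {
        "cat": (["k", "ae", "t"], 60),
        "cad": (["k", "ae", "d"], 5),
        "get": (["g", "e", "t"], 120),
    }
    total_freq = sum(freq for _, freq in lexicon.values())

    posterior = {}
    for word, (phonemes, freq) in lexicon.items():
        prior = freq / total_freq
        # Likelihood: product of the per-segment probabilities of the word's phonemes.
        likelihood = np.prod([seg.get(ph, 1e-6) for seg, ph in zip(evidence, phonemes)])
        posterior[word] = prior * likelihood

    # Normalise so the scores over the candidate set sum to one (Bayes' rule).
    total = sum(posterior.values())
    for word in posterior:
        posterior[word] /= total

    print(sorted(posterior.items(), key=lambda kv: -kv[1]))
    ```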
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model ( D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy ( A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Obleser, J., Eisner, F., & Kotz, S. A. (2008). Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28(32), 8116-8124. doi:10.1523/JNEUROSCI.1290-08.2008.

    Abstract

    Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
  • O'Brien, D. P., & Bowerman, M. (1998). Martin D. S. Braine (1926–1996): Obituary. American Psychologist, 53, 563. doi:10.1037/0003-066X.53.5.563.

    Abstract

    Memorializes Martin D. S. Braine, whose research on child language acquisition and on both child and adult thinking and reasoning had a major influence on modern cognitive psychology. Addressing meaning as well as position, Braine argued that children start acquiring language by learning narrow-scope positional formulas that map components of meaning to positions in the utterance. These proposals were critical in starting discussions of the possible universality of the pivot-grammar stage and of the role of syntax, semantics, and pragmatics in children's early grammar and were pivotal to the rise of approaches in which cognitive development in language acquisition is stressed.
  • Omar, R., Henley, S. M., Bartlett, J. W., Hailstone, J. C., Gordon, E., Sauter, D., Frost, C., Scott, S. K., & Warren, J. D. (2011). The structural neuroanatomy of music emotion recognition: Evidence from frontotemporal lobar degeneration. Neuroimage, 56, 1814-1821. doi:10.1016/j.neuroimage.2011.03.002.

    Abstract

    Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions.
  • Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J.-M. (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience, 2011: 156869. doi:10.1155/2011/156869.

    Abstract

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages.
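
    The nonparametric permutation statistics mentioned in this abstract can be sketched in a few lines of plain Python/NumPy. Note that this is not FieldTrip's MATLAB API, it uses a simple max-statistic correction rather than FieldTrip's cluster-based approach, and the data are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_subj, n_chan = 20, 32

    # Simulated per-subject, per-channel averages for two conditions.
    cond_a = rng.normal(0.0, 1.0, (n_subj, n_chan))
    cond_b = rng.normal(0.3, 1.0, (n_subj, n_chan))   # condition B carries a true effect

    diffs = cond_a - cond_b
    observed = diffs.mean(axis=0)                      # per-channel mean difference

    # Permutation distribution: randomly flip the sign of each subject's difference,
    # which is a valid exchange under the null hypothesis of no condition effect.
    n_perm = 5000
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1, 1], size=(n_subj, 1))
        max_null[i] = np.abs((signs * diffs).mean(axis=0)).max()

    # Channel-wise p-values corrected for multiple comparisons via the max statistic.
    p_corrected = (max_null[None, :] >= np.abs(observed)[:, None]).mean(axis=1)
    print("channels with p < .05:", np.where(p_corrected < 0.05)[0])
    ```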
  • O’Roak, B. J., Deriziotis, P., Lee, C., Vives, L., Schwartz, J. J., Girirajan, S., Karakoc, E., MacKenzie, A. P., Ng, S. B., Baker, C., Rieder, M. J., Nickerson, D. A., Bernier, R., Fisher, S. E., Shendure, J., & Eichler, E. E. (2011). Exome sequencing in sporadic autism spectrum disorders identifies severe de novo mutations. Nature Genetics, 43, 585-589. doi:10.1038/ng.835.

    Abstract

    Evidence for the etiology of autism spectrum disorders (ASDs) has consistently pointed to a strong genetic component complicated by substantial locus heterogeneity [1,2]. We sequenced the exomes of 20 individuals with sporadic ASD (cases) and their parents, reasoning that these families would be enriched for de novo mutations of major effect. We identified 21 de novo mutations, 11 of which were protein altering. Protein-altering mutations were significantly enriched for changes at highly conserved residues. We identified potentially causative de novo events in 4 out of 20 probands, particularly among more severely affected individuals, in FOXP1, GRIN2B, SCN1A and LAMC3. In the FOXP1 mutation carrier, we also observed a rare inherited CNTNAP2 missense variant, and we provide functional support for a multi-hit model for disease risk [3]. Our results show that trio-based exome sequencing is a powerful approach for identifying new candidate genes for ASDs and suggest that de novo mutations may contribute substantially to the genetic etiology of ASDs.

    Additional information

    ORoak_Supplementary text.pdf

  • Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners’ representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.

    Abstract

    Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.
  • Otten, M., & Van Berkum, J. J. A. (2008). Discourse-based word anticipation during language processing: Prediction or priming? Discourse Processes, 45, 464-496. doi:10.1080/01638530802356463.

    Abstract

    Language is an intrinsically open-ended system. This fact has led to the widely shared assumption that readers and listeners do not predict upcoming words, at least not in a way that goes beyond simple priming between words. Recent evidence, however, suggests that readers and listeners do anticipate upcoming words “on the fly” as a text unfolds. In 2 event-related potentials experiments, this study examined whether these predictions are based on the exact message conveyed by the prior discourse or on simpler word-based priming mechanisms. Participants read texts that strongly supported the prediction of a specific word, mixed with non-predictive control texts that contained the same prime words. In Experiment 1A, anomalous words that replaced a highly predictable (as opposed to a non-predictable but coherent) word elicited a long-lasting positive shift, suggesting that the prior discourse had indeed led people to predict specific words. In Experiment 1B, adjectives whose suffix mismatched the predictable noun's syntactic gender elicited a short-lived late negativity in predictive stories but not in prime control stories. Taken together, these findings reveal that the conceptual basis for predicting specific upcoming words during reading is the exact message conveyed by the discourse and not the mere presence of prime words.
  • Ottoni, C., Ricaut, F.-X., Vanderheyden, N., Brucato, N., Waelkens, M., & Decorte, R. (2011). Mitochondrial analysis of a Byzantine population reveals the differential impact of multiple historical events in South Anatolia. European Journal of Human Genetics, 19, 571-576. doi:10.1038/ejhg.2010.230.

    Abstract

    The archaeological site of Sagalassos is located in Southwest Turkey, in the western part of the Taurus mountain range. Human occupation of its territory is attested from the late 12th millennium BP up to the 13th century AD. By analysing the mtDNA variation in 85 skeletons from Sagalassos dated to the 11th–13th century AD, this study attempts to reconstruct the genetic signature potentially left in this region of Anatolia by the many civilizations which succeeded one another over the centuries until the mid-Byzantine period (13th century AD). Authentic ancient DNA data were determined from the control region and some SNPs in the coding region of the mtDNA in 53 individuals. Comparative analyses with up to 157 modern populations allowed us to reconstruct the origin of the mid-Byzantine people still dwelling in dispersed hamlets in Sagalassos, and to detect the maternal contribution of their potential ancestors. By integrating the genetic data with historical and archaeological information, we were able to attest in Sagalassos a significant maternal genetic signature of Balkan/Greek populations, as well as ancient Persians and populations from the Italian peninsula. Some contribution from the Levant has also been detected, whereas no contribution from Central Asian populations could be ascertained.
  • Ozturk, O., & Papafragou, A. (2008). Acquisition of evidentiality and source monitoring. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 368-377). Somerville, Mass.: Cascadilla Press.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A., Kita, S., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2008). Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish. Developmental Psychology, 44(4), 1040-1054. doi:10.1037/0012-1649.44.4.1040.

    Abstract

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children’s gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.
  • Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Nijmegen: Radboud University Nijmegen.

    Abstract

    Even though most studies of language have focused on the speech channel and/or viewed language as an amodal abstract system, there is growing evidence on the role our bodily actions and perceptions play in language and communication. In this context, Özyürek discusses what our meaningful visible bodily actions reveal about our language capacity. Conducting cross-linguistic, behavioral, and neurobiological research, she shows that co-speech gestures reflect the imagistic, iconic aspects of the events talked about and at the same time interact with language production and comprehension processes. Sign languages can also be characterized as having an abstract system of linguistic categories as well as using iconicity in several aspects of language structure and in its processing. Studying language multimodally reveals how grounded language is in our visible bodily actions and opens up new lines of research to study language in its situated, natural face-to-face context.
  • Ozyurek, A., & Perniss, P. M. (2011). Event representations in signed languages. In J. Bohnemeyer, & E. Pederson (Eds.), Event representations in language and cognition (pp. 84-107). New York: Cambridge University Press.
  • Patel, A. D., Iversen, J. R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca's aphasia. Aphasiology, 22(7/8), 776-789. doi:10.1080/02687030701803804.

    Abstract

    Background: Growing evidence for overlap in the syntactic processing of language and music in non-brain-damaged individuals leads to the question of whether aphasic individuals with grammatical comprehension problems in language also have problems processing structural relations in music.

    Aims: The current study sought to test musical syntactic processing in individuals with Broca's aphasia and grammatical comprehension deficits, using both explicit and implicit tasks.

    Methods & Procedures: Two experiments were conducted. In the first experiment 12 individuals with Broca's aphasia (and 14 matched controls) were tested for their sensitivity to grammatical and semantic relations in sentences, and for their sensitivity to musical syntactic (harmonic) relations in chord sequences. An explicit task (acceptability judgement of novel sequences) was used. The second experiment, with 9 individuals with Broca's aphasia (and 12 matched controls), probed musical syntactic processing using an implicit task (harmonic priming).

    Outcomes & Results: In both experiments the aphasic group showed impaired processing of musical syntactic relations. Control experiments indicated that this could not be attributed to low-level problems with the perception of pitch patterns or with auditory short-term memory for tones.

    Conclusions: The results suggest that musical syntactic processing in agrammatic aphasia deserves systematic investigation, and that such studies could help probe the nature of the processing deficits underlying linguistic agrammatism. Methodological suggestions are offered for future work in this little-explored area.
  • Paternoster, L., Evans, D. M., Aagaard Nohr, E., Holst, C., Gaborieau, V., Brennan, P., Prior Gjesing, A., Grarup, N., Witte, D. R., Jørgensen, T., Linneberg, A., Lauritzen, T., Sandbaek, A., Hansen, T., Pedersen, O., Elliott, K. S., Kemp, J. P., St Pourcain, B., McMahon, G., Zelenika, D., Hager, J., Lathrop, M., Timpson, N. J., Davey Smith, G., & Sørensen, T. I. A. (2011). Genome-Wide Population-Based Association Study of Extremely Overweight Young Adults – The GOYA Study. PLoS ONE, 6(9): e24303. doi:10.1371/journal.pone.0024303.

    Abstract

    Background: Thirty-two common variants associated with body mass index (BMI) have been identified in genome-wide association studies, explaining ∼1.45% of BMI variation in general population cohorts. We performed a genome-wide association study in a sample of young adults enriched for extremely overweight individuals. We aimed to identify new loci associated with BMI and to ascertain whether using an extreme sampling design would identify the variants known to be associated with BMI in general populations. Methodology/Principal Findings: From two large Danish cohorts we selected all extremely overweight young men and women (n = 2,633), and equal numbers of population-based controls (n = 2,740, drawn randomly from the same populations as the extremes, representing ∼212,000 individuals). We followed up novel (at the time of the study) association signals (p < 0.001) from the discovery cohort in a genome-wide study of 5,846 Europeans, before attempting to replicate the most strongly associated 28 SNPs in an independent sample of Danish individuals (n = 20,917) and a population-based cohort of 15-year-old British adolescents (n = 2,418). Our discovery analysis identified SNPs at three loci known to be associated with BMI with genome-wide confidence (P < 5×10⁻⁸; FTO, MC4R and FAIM2). We also found strong evidence of association at the known TMEM18, GNPDA2, SEC16B, TFAP2B, SH2B1 and KCTD15 loci (p < 0.001), and nominal association (p < 0.05) at a further 8 loci known to be associated with BMI. However, meta-analyses of our discovery and replication cohorts identified no novel associations. Significance: Our results indicate that the detectable genetic variation associated with extreme overweight is very similar to that previously found for general BMI. This suggests that population-based study designs with enriched sampling of individuals with the extreme phenotype may be an efficient method for identifying common variants that influence quantitative traits and a valid alternative to genotyping all individuals in large population-based studies, which may require tens of thousands of subjects to achieve similar power.
  • Pederson, E. (1995). Questionnaire on event realization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 54-60). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004359.

    Abstract

    "Event realisation" refers to the normal final state of the affected entity of an activity described by a verb. For example, the sentence John killed the mosquito entails that the mosquito is afterwards dead – this is the full realisation of a killing event. By contrast, a sentence such as John hit the mosquito does not entail the mosquito’s death (even though we might assume this to be a likely result). In using a certain verb, which features of event realisation are entailed and which are just likely? This questionnaire supports cross-linguistic exploration of event realisation for a range of event types.
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Perniss, P. M., & Ozyurek, A. (2008). Representations of action, motion and location in sign space: A comparison of German (DGS) and Turkish (TID) sign language narratives. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 8 (pp. 353-376). Seedorf: Signum Press.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
