Publications

  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge University Press.
  • Levinson, S. C., & Gray, R. D. (2012). Tools from evolutionary biology shed new light on the diversification of languages. Trends in Cognitive Sciences, 16(3), 167-173. doi:10.1016/j.tics.2012.01.007.

    Abstract

    Computational methods have revolutionized evolutionary biology. In this paper we explore the impact these methods are now having on our understanding of the forces that both affect the diversification of human languages and shape human cognition. We show how these methods can illuminate problems ranging from the nature of constraints on linguistic variation to the role that social processes play in determining the rate of linguistic change. Throughout the paper we argue that the cognitive sciences should move away from an idealized model of human cognition, to a more biologically realistic model where variation is central.
  • Lewis, A. G., Schoffelen, J.-M., Hoffmann, C., Bastiaansen, M. C. M., & Schriefers, H. (2017). Discourse-level semantic coherence influences beta oscillatory dynamics and the N400 during sentence comprehension. Language, Cognition and Neuroscience, 32(5), 601-617. doi:10.1080/23273798.2016.1211300.

    Abstract

    In this study, we used electroencephalography to investigate the influence of discourse-level semantic coherence on electrophysiological signatures of local sentence-level processing. Participants read groups of four sentences that could either form coherent stories or were semantically unrelated. For semantically coherent discourses compared to incoherent ones, the N400 was smaller at sentences 2–4, while the visual N1 was larger at the third and fourth sentences. Oscillatory activity in the beta frequency range (13–21 Hz) was higher for coherent discourses. We relate the N400 effect to a disruption of local sentence-level semantic processing when sentences are unrelated. Our beta findings can be tentatively related to disruption of local sentence-level syntactic processing, but it cannot be fully ruled out that they are instead (or also) related to disrupted local sentence-level semantic processing. We conclude that manipulating discourse-level semantic coherence does have an effect on oscillatory power related to local sentence-level processing.
  • Liebal, K., & Haun, D. B. M. (2012). The importance of comparative psychology for developmental science [Review Article]. International Journal of Developmental Science, 6, 21-23. doi:10.3233/DEV-2012-11088.

    Abstract

    The aim of this essay is to elucidate the relevance of cross-species comparisons for the investigation of human behavior and its development. The focus is on the comparison of human children and another group of primates, the non-human great apes, with special attention to their cognitive skills. Integrating a comparative and developmental perspective, we argue, can provide additional answers to central and elusive questions about human behavior in general and its development in particular: What are the heritable predispositions of the human mind? What cognitive traits are uniquely human? In this sense, Developmental Science would benefit from results of Comparative Psychology.
  • Linkenauger, S. A., Lerner, M. D., Ramenzoni, V. C., & Proffitt, D. R. (2012). A perceptual-motor deficit predicts social and communicative impairments in individuals with autism spectrum disorders. Autism Research, 5, 352-362. doi:10.1002/aur.1248.

    Abstract

    Individuals with autism spectrum disorders (ASDs) have known impairments in social and motor skills. Identifying putative underlying mechanisms of these impairments could lead to improved understanding of the etiology of core social/communicative deficits in ASDs, and identification of novel intervention targets. The ability to perceptually integrate one's physical capacities with one's environment (affordance perception) may be such a mechanism. This ability has been theorized to be impaired in ASDs, but this question has never been directly tested. Crucially, affordance perception has been shown to be amenable to learning; thus, if it is implicated in deficits in ASDs, it may be a valuable unexplored intervention target. The present study compared affordance perception in adolescents and adults with ASDs to typically developing (TD) controls. Two groups of individuals (adolescents and adults) with ASDs and age-matched TD controls completed well-established action capability estimation tasks (reachability, graspability, and aperture passability). Their caregivers completed a measure of their lifetime social/communicative deficits. Compared with controls, individuals with ASDs showed unprecedented gross impairments in relating information about their bodies' action capabilities to visual information specifying the environment. The magnitude of these deficits strongly predicted the magnitude of social/communicative impairments in individuals with ASDs. Thus, social/communicative impairments in ASDs may derive, at least in part, from deficits in basic perceptual–motor processes (e.g. action capability estimation). Such deficits may impair the ability to maintain and calibrate the relationship between oneself and one's social and physical environments, and present a fruitful, novel, and unexplored target for intervention.
  • Liszkowski, U., Brown, P., Callaghan, T., Takada, A., & De Vos, C. (2012). A prelinguistic gestural universal of human communication. Cognitive Science, 36, 698-713. doi:10.1111/j.1551-6709.2011.01228.x.

    Abstract

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10–14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same proto-typical morphology of the extended index finger. Infants’ pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers’ and infants’ pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication.
  • Little, H., Eryılmaz, K., & de Boer, B. (2017). Conventionalisation and Discrimination as Competing Pressures on Continuous Speech-like Signals. Interaction Studies, 18(3), 355-378. doi:10.1075/is.18.3.04lit.

    Abstract

    Arbitrary communication systems can emerge from iconic beginnings through processes of conventionalisation via interaction. Here, we explore whether this process of conventionalisation occurs with continuous, auditory signals. We conducted an artificial signalling experiment. Participants either created signals for themselves, or for a partner in a communication game. We found no evidence that the speech-like signals in our experiment became less iconic or simpler through interaction. We hypothesise that the reason for our results is that when it is difficult to be iconic initially because of the constraints of the modality, then iconicity needs to emerge to enable grounding before conventionalisation can occur. Further, pressures for discrimination, caused by the expanding meaning space in our study, may cause more complexity to emerge, again as a result of the restrictive signalling modality. Our findings have possible implications for the processes of conventionalisation possible in signed and spoken languages, as the spoken modality is more restrictive than the manual modality.
  • Little, H., Rasilo, H., van der Ham, S., & Eryılmaz, K. (2017). Empirical approaches for investigating the origins of structure in speech. Interaction Studies, 18(3), 332-354. doi:10.1075/is.18.3.03lit.

    Abstract

    In language evolution research, the use of computational and experimental methods to investigate the emergence of structure in language is exploding. In this review, we look exclusively at work exploring the emergence of structure in speech, on both a categorical level (what drives the emergence of an inventory of individual speech sounds), and a combinatorial level (how these individual speech sounds emerge and are reused as part of larger structures). We show that computational and experimental methods for investigating population-level processes can be effectively used to explore and measure the effects of learning, communication and transmission on the emergence of structure in speech. We also look at work on child language acquisition as a tool for generating and validating hypotheses for the emergence of speech categories. Further, we review the effects of noise, iconicity and production effects.
  • Little, H. (2017). Introduction to the Special Issue on the Emergence of Sound Systems. Journal of Language Evolution, 2(1), 1-3. doi:10.1093/jole/lzx014.

    Abstract

    How did human sound systems get to be the way they are? Collecting contributions implementing a wealth of methods to address this question, this special issue treats language and speech as being the result of a complex adaptive system. The work throughout provides evidence and theory at the levels of phylogeny, glossogeny and ontogeny. In taking a multi-disciplinary approach that considers interactions within and between these levels of selection, the papers collectively provide a valuable, integrated contribution to existing work on the evolution of speech and sound systems.
  • Little, H., Eryılmaz, K., & de Boer, B. (2017). Signal dimensionality and the emergence of combinatorial structure. Cognition, 168, 1-15. doi:10.1016/j.cognition.2017.06.011.

    Abstract

    In language, a small number of meaningless building blocks can be combined into an unlimited set of meaningful utterances. This is known as combinatorial structure. One hypothesis for the initial emergence of combinatorial structure in language is that recombining elements of signals solves the problem of overcrowding in a signal space. Another hypothesis is that iconicity may impede the emergence of combinatorial structure. However, how these two hypotheses relate to each other is not often discussed. In this paper, we explore how signal space dimensionality relates to both overcrowding in the signal space and iconicity. We use an artificial signalling experiment to test whether a signal space and a meaning space having similar topologies will generate an iconic system and whether, when the topologies differ, the emergence of combinatorially structured signals is facilitated. In our experiments, signals are created from participants' hand movements, which are measured using an infrared sensor. We found that participants take advantage of iconic signal-meaning mappings where possible. Further, we use trajectory predictability, measures of variance, and Hidden Markov Models to measure the use of structure within the signals produced and found that when topologies do not match, then there is more evidence of combinatorial structure. The results from these experiments are interpreted in the context of the differences between the emergence of combinatorial structure in different linguistic modalities (speech and sign).

    Additional information

    mmc1.zip
  • Little, H. (Ed.). (2017). Special Issue on the Emergence of Sound Systems [Special Issue]. Journal of Language Evolution, 2(1).
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

    Additional information

    Data availability
  • Ludwig, A., Vernesi, C., Lieckfeldt, D., Lattenkamp, E. Z., Wiethölter, A., & Lutz, W. (2012). Origin and patterns of genetic diversity of German fallow deer as inferred from mitochondrial DNA. European Journal of Wildlife Research, 58(2), 495-501. doi:10.1007/s10344-011-0571-5.

    Abstract

    Although not native to Germany, fallow deer (Dama dama) are commonly found today, but their origin as well as the genetic structure of the founding members is still unclear. In order to address these aspects, we sequenced ~400 bp of the mitochondrial d-loop of 365 animals from 22 locations in nine German Federal States. Nine new haplotypes were detected and archived in GenBank. Our data produced evidence for a Turkish origin of the German founders. However, German fallow deer populations have complex patterns of mtDNA variation. In particular, three distinct clusters were identified: Schleswig-Holstein, Brandenburg/Hesse/Rhineland and Saxony/Lower Saxony/Mecklenburg/Westphalia/Anhalt. Signatures of recent demographic expansions were found for the latter two. An overall pattern of reduced genetic variation was therefore accompanied by a relatively strong genetic structure, as highlighted by an overall Φ_CT value of 0.74 (P < 0.001).
  • Lum, J. A., & Kidd, E. (2012). An examination of the associations among multiple memory systems, past tense, and vocabulary in typically developing 5-year-old children. Journal of Speech, Language, and Hearing Research, 55(4), 989-1006. doi:10.1044/1092-4388(2011/10-0137).
  • MacLean, E. L., Matthews, L. J., Hare, B. A., Nunn, C. L., Anderson, R. C., Aureli, F., Brannon, E. M., Call, J., Drea, C. M., Emery, N. J., Haun, D. B. M., Herrmann, E., Jacobs, L. F., Platt, M. L., Rosati, A. G., Sandel, A. A., Schroepfer, K. K., Seed, A. M., Tan, J., Van Schaik, C. P., & Wobber, V. (2012). How does cognition evolve? Phylogenetic comparative psychology. Animal Cognition, 15, 223-238. doi:10.1007/s10071-011-0448-8.

    Abstract

    Now more than ever animal studies have the potential to test hypotheses regarding how cognition evolves. Comparative psychologists have developed new techniques to probe the cognitive mechanisms underlying animal behavior, and they have become increasingly skillful at adapting methodologies to test multiple species. Meanwhile, evolutionary biologists have generated quantitative approaches to investigate the phylogenetic distribution and function of phenotypic traits, including cognition. In particular, phylogenetic methods can quantitatively (1) test whether specific cognitive abilities are correlated with life history (e.g., lifespan), morphology (e.g., brain size), or socio-ecological variables (e.g., social system), (2) measure how strongly phylogenetic relatedness predicts the distribution of cognitive skills across species, and (3) estimate the ancestral state of a given cognitive trait using measures of cognitive performance from extant species. Phylogenetic methods can also be used to guide the selection of species comparisons that offer the strongest tests of a priori predictions of cognitive evolutionary hypotheses (i.e., phylogenetic targeting). Here, we explain how an integration of comparative psychology and evolutionary biology will answer a host of questions regarding the phylogenetic distribution and history of cognitive traits, as well as the evolutionary processes that drove their evolution.
  • Magyari, L., De Ruiter, J. P., & Levinson, S. C. (2017). Temporal preparation for speaking in question-answer sequences. Frontiers in Psychology, 8: 211. doi:10.3389/fpsyg.2017.00211.

    Abstract

    In everyday conversations, the gap between turns of conversational partners is most frequently between 0 and 200 ms. We were interested in how speakers achieve such fast transitions. We designed an experiment in which participants listened to pre-recorded questions about images presented on a screen and were asked to answer these questions. We tested whether speakers already prepare their answers while they listen to questions and whether they can prepare for the time of articulation by anticipating when questions end. In the experiment, it was possible to guess the answer at the beginning of the questions in half of the experimental trials. We also manipulated whether it was possible to predict the length of the last word of the questions. The results suggest that when listeners know the answer early, they start speech production already during the questions. Speakers can also time when to speak by predicting the duration of turns. These temporal predictions can be based on the length of anticipated words and on the overall probability of turn durations.

    Additional information

    presentation 1.pdf
  • Magyari, L., & De Ruiter, J. P. (2012). Prediction of turn-ends based on anticipation of upcoming words. Frontiers in Psychology, 3, 376. doi:10.3389/fpsyg.2012.00376.

    Abstract

    During conversation listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor’s turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker’s turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends, because they know how it ends. We conducted a gating study to examine if better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment where people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher for turns whose ends were estimated more accurately in time in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people made predictions in advance about the upcoming content of a turn and used this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on anticipation of words and syntactic frames in comprehension.
  • Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2017). Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds. Frontiers in Psychology, 8: 1164. doi:10.3389/fpsyg.2017.01164.

    Abstract

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed.

    Additional information

    Supplementary Material Appendices.pdf
  • Majid, A., & Enfield, N. J. (2017). Body. In H. Burkhardt, J. Seibt, G. Imaguire, & S. Gerogiorgakis (Eds.), Handbook of mereology (pp. 100-103). Munich: Philosophia.
  • Majid, A. (2012). A guide to stimulus-based elicitation for semantic categories. In N. Thieberger (Ed.), The Oxford handbook of linguistic fieldwork (pp. 54-71). New York: Oxford University Press.
  • Majid, A. (2012). Current emotion research in the language sciences. Emotion Review, 4, 432-443. doi:10.1177/1754073912445827.

    Abstract

    When researchers think about the interaction between language and emotion, they typically focus on descriptive emotion words. This review demonstrates that emotion can interact with language at many levels of structure, from the sound patterns of a language to its lexicon and grammar, and beyond to how it appears in conversation and discourse. Findings are considered from diverse subfields across the language sciences, including cognitive linguistics, psycholinguistics, linguistic anthropology, and conversation analysis. Taken together, it is clear that emotional expression is finely tuned to language-specific structures. Future emotion research can better exploit cross-linguistic variation to unravel possible universal principles operating between language and emotion.
  • Majid, A., Manko, P., & De Valk, J. (2017). Language of the senses. In S. Dekker (Ed.), Scientific breakthroughs in the classroom! (pp. 40-76). Nijmegen: Science Education Hub Radboud University.

    Abstract

    The project that we describe in this chapter has the theme ‘Language of the senses’. This theme is based on the research of Asifa Majid and her team regarding the influence of language and culture on sensory perception. The chapter consists of two sections. Section 2.1 describes how different sensory perceptions are spoken of in different languages. Teachers can use this section as substantive preparation before they launch this theme in the classroom. Section 2.2 describes how teachers can handle this theme in accordance with the seven phases of inquiry-based learning. Chapter 1, in which the general guideline of the seven phases is described, forms the basis for this. We therefore recommend the use of chapter 1 as the starting point for the execution of a project in the classroom. This chapter provides the thematic additions.

    Additional information

    Materials Language of the senses
  • Majid, A., Manko, P., & de Valk, J. (2017). Taal der Zintuigen. In S. Dekker, & J. Van Baren-Nawrocka (Eds.), Wetenschappelijke doorbraken de klas in! Molecuulbotsingen, Stress en Taal der Zintuigen (pp. 128-166). Nijmegen: Wetenschapsknooppunt Radboud Universiteit.

    Abstract

    ‘Language of the senses’ is about the influence of language and culture on sensory perception. How do you describe what you see, feel, taste, or smell? Some cultures have many different words for colour, while other cultures have very few. Are we born with these different colour categories? And does how you talk about something also determine what you perceive?
  • Majid, A. (2012). The role of language in a science of emotion [Comment]. Emotion Review, 4, 380-381. doi:10.1177/1754073912445819.

    Abstract

    Emotion scientists often take an ambivalent stance concerning the role of language in a science of emotion. However, it is important for emotion researchers to contemplate some of the consequences of current practices for their theory building. There is a danger of an overreliance on the English language as a transparent window into emotion categories. More consideration has to be given to cross-linguistic comparison in the future so that models of language acquisition and of the language–cognition interface better fit the extant variation found in today’s peoples.
  • Majid, A., Speed, L., Croijmans, I., & Arshamian, A. (2017). What makes a better smeller? Perception, 46, 406-430. doi:10.1177/0301006616688224.

    Abstract

    Olfaction is often viewed as difficult, yet the empirical evidence suggests a different picture. A closer look shows people around the world differ in their ability to detect, discriminate, and name odors. This gives rise to the question of what influences our ability to smell. Instead of focusing on olfactory deficiencies, this review presents a positive perspective by focusing on factors that make someone a better smeller. We consider three driving forces in improving olfactory ability: one’s biological makeup, one’s experience, and the environment. For each factor, we consider aspects proposed to improve odor perception and critically examine the evidence; as well as introducing lesser discussed areas. In terms of biology, there are cases of neurodiversity, such as olfactory synesthesia, that serve to enhance olfactory ability. Our lifetime experience, be it typical development or unique training experience, can also modify the trajectory of olfaction. Finally, our odor environment, in terms of ambient odor or culinary traditions, can influence odor perception too. Rather than highlighting the weaknesses of olfaction, we emphasize routes to harnessing our olfactory potential.
  • Majid, A., Boroditsky, L., & Gaby, A. (Eds.). (2012). Time in terms of space [Research topic] [Special Issue]. Frontiers in cultural psychology. Retrieved from http://www.frontiersin.org/cultural_psychology/researchtopics/Time_in_terms_of_space/755.

    Abstract

    This Research Topic explores the question: what is the relationship between representations of time and space in cultures around the world? This question touches on the broader issue of how humans come to represent and reason about abstract entities – things we cannot see or touch. Time is a particularly opportune domain to investigate this topic. Across cultures, people use spatial representations for time, for example in graphs, time-lines, clocks, sundials, hourglasses, and calendars. In language, time is also heavily related to space, with spatial terms often used to describe the order and duration of events. In English, for example, we might move a meeting forward, push a deadline back, attend a long concert or go on a short break. People also make consistent spatial gestures when talking about time, and appear to spontaneously invoke spatial representations when processing temporal language. A large body of evidence suggests a close correspondence between temporal and spatial language and thought. However, the ways that people spatialize time can differ dramatically across languages and cultures. This research topic identifies and explores some of the sources of this variation, including patterns in spatial thinking, patterns in metaphor, gesture and other cultural systems. This Research Topic explores how speakers of different languages talk about time and space and how they think about these domains, outside of language. The Research Topic invites papers exploring the following issues: 1. Do the linguistic representations of space and time share the same lexical and morphosyntactic resources? 2. To what extent does the conceptualization of time follow the conceptualization of space?
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake - but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 843-847. doi:10.1037/a0029284.

    Abstract

    Are there individual differences in children’s prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff et al., 1987), we found that, upon hearing a sentence like “The boy eats a big cake”, two-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb, eats, and prior to hearing the word, cake. Importantly, children’s prediction skills were significantly correlated with their productive vocabulary size – Skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input while low producers did not. Furthermore, we found that children’s prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers.
  • Mansbridge, M. P., Tamaoka, K., Xiong, K., & Verdonschot, R. G. (2017). Ambiguity in the processing of Mandarin Chinese relative clauses: One factor cannot explain it all. PLoS One, 12(6): e0178369. doi:10.1371/journal.pone.0178369.

    Abstract

    This study addresses the question of whether native Mandarin Chinese speakers process and comprehend subject-extracted relative clauses (SRC) more readily than object-extracted relative clauses (ORC) in Mandarin Chinese. Presently, this has been a hotly debated issue, with various studies producing contrasting results. Using two eye-tracking experiments with ambiguous and unambiguous RCs, this study shows that both ORCs and SRCs have different processing requirements depending on the locus and time course during reading. The results reveal that ORC reading was possibly facilitated by linear/temporal integration and canonicity. On the other hand, similarity-based interference made ORCs more difficult, and expectation-based processing was more prominent for unambiguous ORCs. Overall, RC processing in Mandarin should not be broken down to a single ORC (dis)advantage, but understood as multiple interdependent factors influencing whether ORCs are either more difficult or easier to parse depending on the task and context at hand.
  • Marti, M., Alhama, R. G., & Recasens, M. (2012). Los avances tecnológicos y la ciencia del lenguaje. In T. Jiménez Juliá, B. López Meirama, V. Vázquez Rozas, & A. Veiga (Eds.), Cum corde et in nova grammatica. Estudios ofrecidos a Guillermo Rojo (pp. 543-553). Santiago de Compostela: Universidade de Santiago de Compostela.

    Abstract

    Modern science arises from the conjunction of theoretical postulates and the development of a technological infrastructure that makes it possible to observe facts adequately, carry out experiments, and verify hypotheses. Since Galileo, science and technology have advanced together. In the Western world, science has evolved from purely speculative proposals (based on a priori postulates) to the use of experimental and statistical methods to better explain our observations. Technology joins forces with science by giving the researcher adequate access to the facts to be explained. Thus, in order to observe the celestial bodies, Galileo improved the available optical instruments, which allowed him a more precise approach to his object of study and, consequently, more solid foundations for his theoretical proposal. Similarly, digital technology has now made possible the large-scale extraction of data and their statistical analysis in order to test initial hypotheses: linguistics could not take the step from pure speculation to the statistical analysis of facts until the advent of digital technologies.
  • Martin, A. E., & Doumas, L. A. A. (2017). A mechanism for the cortical computation of hierarchical linguistic structure. PLoS Biology, 15(3): e2000663. doi:10.1371/journal.pbio.2000663.

    Abstract

    Biological systems often detect species-specific signals in the environment. In humans, speech and language are species-specific signals of fundamental biological importance. To detect the linguistic signal, human brains must form hierarchical representations from a sequence of perceptual inputs distributed in time. What mechanism underlies this ability? One hypothesis is that the brain repurposed an available neurobiological mechanism when hierarchical linguistic representation became an efficient solution to a computational problem posed to the organism. Under such an account, a single mechanism must have the capacity to perform multiple, functionally related computations, e.g., detect the linguistic signal and perform other cognitive functions, while, ideally, oscillating like the human brain. We show that a computational model of analogy, built for an entirely different purpose—learning relational reasoning—processes sentences, represents their meaning, and, crucially, exhibits oscillatory activation patterns resembling cortical signals elicited by the same stimuli. Such redundancy in the cortical and machine signals is indicative of formal and mechanistic alignment between representational structure building and “cortical” oscillations. By inductive inference, this synergy suggests that the cortical signal reflects structure generation, just as the machine signal does. A single mechanism—using time to encode information across a layered network—generates the kind of (de)compositional representational hierarchy that is crucial for human language and offers a mechanistic linking hypothesis between linguistic representation and cortical computation
  • Martin, A. E., Huettig, F., & Nieuwland, M. S. (2017). Can structural priming answer the important questions about language? A commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e304. doi:10.1017/S0140525X17000528.

    Abstract

    While structural priming makes a valuable contribution to psycholinguistics, it does not allow direct observation of representation, nor escape “source ambiguity.” Structural priming taps into implicit memory representations and processes that may differ from what is used online. We question whether implicit memory for language can and should be equated with linguistic representation or with language processing.
  • Martin, A. E., Nieuwland, M. S., & Carreiras, M. (2012). Event-related brain potentials index cue-based retrieval interference during sentence comprehension. NeuroImage, 59(2), 1859-1869. doi:10.1016/j.neuroimage.2011.08.057.

    Abstract

    Successful language use requires access to products of past processing within an evolving discourse. A central issue for any neurocognitive theory of language then concerns the role of memory variables during language processing. Under a cue-based retrieval account of language comprehension, linguistic dependency resolution (e.g., retrieving antecedents) is subject to interference from other information in the sentence, especially information that occurs between the words that form the dependency (e.g., between the antecedent and the retrieval site). Retrieval interference may then shape processing complexity as a function of the match of the information at retrieval with the antecedent versus other recent or similar items in memory. To address these issues, we studied the online processing of ellipsis in Castilian Spanish, a language with morphological gender agreement. We recorded event-related brain potentials while participants read sentences containing noun-phrase ellipsis indicated by the determiner otro/a (‘another’). These determiners had a grammatically correct or incorrect gender with respect to their antecedent nouns that occurred earlier in the sentence. Moreover, between each antecedent and determiner, another noun phrase occurred that was structurally unavailable as an antecedent and that matched or mismatched the gender of the antecedent (i.e., a local agreement attractor). In contrast to extant P600 results on agreement violation processing, and inconsistent with predictions from neurocognitive models of sentence processing, grammatically incorrect determiners evoked a sustained, broadly distributed negativity compared to correct ones between 400 and 1000 ms after word onset, possibly related to sustained negativities as observed for referential processing difficulties. Crucially, this effect was modulated by the attractor: an increased negativity was observed for grammatically correct determiners that did not match the gender of the attractor, suggesting that structurally unavailable noun phrases were at least temporarily considered for grammatically correct ellipsis. These results constitute the first ERP evidence for cue-based retrieval interference during comprehension of grammatical sentences.
  • Martin, A. E., Monahan, P. J., & Samuel, A. G. (2017). Prediction of agreement and phonetic overlap shape sublexical identification. Language and Speech, 60(3), 356-376. doi:10.1177/0023830916650714.

    Abstract

    The mapping between the physical speech signal and our internal representations is rarely straightforward. When faced with uncertainty, higher-order information is used to parse the signal and because of this, the lexicon and some aspects of sentential context have been shown to modulate the identification of ambiguous phonetic segments. Here, using a phoneme identification task (i.e., participants judged whether they heard [o] or [a] at the end of an adjective in a noun–adjective sequence), we asked whether grammatical gender cues influence phonetic identification and if this influence is shaped by the phonetic properties of the agreeing elements. In three experiments, we show that phrase-level gender agreement in Spanish affects the identification of ambiguous adjective-final vowels. Moreover, this effect is strongest when the phonetic characteristics of the element triggering agreement and the phonetic form of the agreeing element are identical. Our data are consistent with models wherein listeners generate specific predictions based on the interplay of underlying morphosyntactic knowledge and surface phonetic cues.
  • Massaro, D. W., & Perlman, M. (2017). Quantifying iconicity’s contribution during language acquisition: Implications for vocabulary learning. Frontiers in Communication, 2: 4. doi:10.3389/fcomm.2017.00004.

    Abstract

    Previous research found that iconicity—the motivated correspondence between word form and meaning—contributes to expressive vocabulary acquisition. We present two new experiments with two different databases and with novel analyses to give a detailed quantification of how iconicity contributes to vocabulary acquisition across development, including both receptive understanding and production. The results demonstrate that iconicity is more prevalent early in acquisition and diminishes with increasing age and with increasing vocabulary. In the first experiment, we found that the influence of iconicity on children’s production vocabulary decreased gradually with increasing age. These effects were independent of the observed influence of concreteness, difficulty of articulation, and parental input frequency. Importantly, we substantiated the independence of iconicity, concreteness, and systematicity—a statistical regularity between sounds and meanings. In the second experiment, we found that the average iconicity of both a child’s receptive vocabulary and expressive vocabulary diminished dramatically with increases in vocabulary size. These results indicate that iconic words tend to be learned early in the acquisition of both receptive vocabulary and expressive vocabulary. We recommend that iconicity be included as one of the many different influences on a child’s early vocabulary acquisition.
  • Matić, D. (2012). Review of: Assertion by Mark Jary, Palgrave Macmillan, 2010 [Web Post]. The LINGUIST List. Retrieved from http://linguistlist.org/pubs/reviews/get-review.cfm?SubID=4547242.

    Abstract

    Even though assertion has held centre stage in much philosophical and linguistic theorising on language, Mark Jary’s ‘Assertion’ represents the first book-length treatment of the topic. The content of the book is aptly described by the author himself: “This book has two aims. One is to bring together and discuss in a systematic way a range of perspectives on assertion: philosophical, linguistic and psychological. [...] The other is to present a view of the pragmatics of assertion, with particular emphasis on the contribution of the declarative mood to the process of utterance interpretation.” (p. 1). The promise contained in this introductory note is to a large extent fulfilled: the first seven chapters of the book discuss many of the relevant philosophical and linguistic approaches to assertion and at the same time provide the background for the presentation of Jary’s own view on the pragmatics of declaratives, presented in the last (and longest) chapter.
  • McLaughlin, R. L., Schijven, D., Van Rheenen, W., Van Eijk, K. R., O’Brien, M., Project MinE GWAS Consortium, Schizophrenia Working Group of the Psychiatric Genomics Consortium, Kahn, R. S., Ophoff, R. A., Goris, A., Bradley, D. G., Al-Chalabi, A., van den Berg, L. H., Luykx, J. J., Hardiman, O., & Veldink, J. H. (2017). Genetic correlation between amyotrophic lateral sclerosis and schizophrenia. Nature Communications, 8: 14774. doi:10.1038/ncomms14774.

    Abstract

    We have previously shown higher-than-expected rates of schizophrenia in relatives of patients with amyotrophic lateral sclerosis (ALS), suggesting an aetiological relationship between the diseases. Here, we investigate the genetic relationship between ALS and schizophrenia using genome-wide association study data from over 100,000 unique individuals. Using linkage disequilibrium score regression, we estimate the genetic correlation between ALS and schizophrenia to be 14.3% (7.05–21.6; P = 1 × 10⁻⁴) with schizophrenia polygenic risk scores explaining up to 0.12% of the variance in ALS (P = 8.4 × 10⁻⁷). A modest increase in comorbidity of ALS and schizophrenia is expected given these findings (odds ratio 1.08–1.26) but this would require very large studies to observe epidemiologically. We identify five potential novel ALS-associated loci using conditional false discovery rate analysis. It is likely that shared neurobiological mechanisms between these two disorders will engender novel hypotheses in future preclinical and clinical studies.
  • McQueen, J. M., & Huettig, F. (2012). Changing only the probability that spoken words will be distorted changes how they are recognized. Journal of the Acoustical Society of America, 131(1), 509-517. doi:10.1121/1.3664087.

    Abstract

    An eye-tracking experiment examined contextual flexibility in speech processing in response to distortions in spoken input. Dutch participants heard Dutch sentences containing critical words and saw four-picture displays. The name of one picture either had the same onset phonemes as the critical word or had a different first phoneme and rhymed. Participants fixated onset-overlap more than rhyme-overlap pictures, but this tendency varied with speech quality. Relative to a baseline with noise-free sentences, participants looked less at onset-overlap and more at rhyme-overlap pictures when phonemes in the sentences (but not in the critical words) were replaced by noises like those heard on a badly-tuned AM radio. The position of the noises (word-initial or word-medial) had no effect. Noises elsewhere in the sentences apparently made evidence about the critical word less reliable: Listeners became less confident of having heard the onset-overlap name but also less sure of having not heard the rhyme-overlap name. The same acoustic information has different effects on spoken-word recognition as the probability of distortion changes.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer word. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
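
    The embedding statistics reported above are easy to picture with a small counting sketch. The Python snippet below is an illustrative toy, not the authors' analysis: the pseudo-phonemic transcriptions and syllable counts are invented, and a real analysis would run over a full pronunciation lexicon. It checks, for each polysyllabic entry, whether any shorter entry occurs as a contiguous substring and whether that embedding sits at the onset of the carrier word.

    ```python
    # Toy lexicon: word -> (invented pseudo-phonemic transcription, syllable count)
    lexicon = {
        "ham":     ("ham", 1),
        "can":     ("kan", 1),
        "king":    ("kIN", 1),
        "hamster": ("hamst@", 2),
        "candle":  ("kand@l", 2),
        "parking": ("pA:kIN", 2),
    }

    polysyllables = {w: t for w, (t, syl) in lexicon.items() if syl > 1}
    monosyllables = {t: w for w, (t, syl) in lexicon.items() if syl == 1}

    embeddings = {}
    for word, trans in polysyllables.items():
        hits = []
        for start in range(len(trans)):
            for end in range(start + 1, len(trans) + 1):
                chunk = trans[start:end]
                if chunk in monosyllables:
                    position = "onset" if start == 0 else "non-onset"
                    hits.append((monosyllables[chunk], position))
        if hits:
            embeddings[word] = hits

    share = len(embeddings) / len(polysyllables)
    print(f"{share:.0%} of the polysyllables contain an embedded word")
    for word, hits in embeddings.items():
        print(word, "->", hits)
    ```
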
  • McQueen, J. M., Tyler, M., & Cutler, A. (2012). Lexical retuning of children’s speech perception: Evidence for knowledge about words’ component sounds. Language Learning and Development, 8, 317-339. doi:10.1080/15475441.2011.641887.

    Abstract

    Children hear new words from many different talkers; to learn words most efficiently, they should be able to represent them independently of talker-specific pronunciation detail. However, do children know what the component sounds of words should be, and can they use that knowledge to deal with different talkers' phonetic realizations? Experiment 1 replicated prior studies on lexically guided retuning of speech perception in adults, with a picture-verification methodology suitable for children. One participant group heard an ambiguous fricative ([s/f]) replacing /f/ (e.g., in words like giraffe); another group heard [s/f] replacing /s/ (e.g., in platypus). The first group subsequently identified more tokens on a Simpie-[s/f]impie-Fimpie toy-name continuum as Fimpie. Experiments 2 and 3 found equivalent lexically guided retuning effects in 12- and 6-year-olds. Children aged 6 have all that is needed for adjusting to talker variation in speech: detailed and abstract phonological representations and the ability to apply them during spoken-word recognition.

  • Mellem, M. S., Bastiaansen, M. C. M., Pilgrim, L. K., Medvedev, A. V., & Friedman, R. B. (2012). Word class and context affect alpha-band oscillatory dynamics in an older population. Frontiers in Psychology, 3, 97. doi:10.3389/fpsyg.2012.00097.

    Abstract

    Differences in the oscillatory EEG dynamics of reading open class (OC) and closed class (CC) words have previously been found (Bastiaansen et al., 2005) and are thought to reflect differences in lexical-semantic content between these word classes. In particular, the theta-band (4–7 Hz) seems to play a prominent role in lexical-semantic retrieval. We tested whether this theta effect is robust in an older population of subjects. Additionally, we examined how the context of a word can modulate the oscillatory dynamics underlying retrieval for the two different classes of words. Older participants (mean age 55) read words presented in either syntactically correct sentences or in a scrambled order (“scrambled sentence”) while their EEG was recorded. We performed time–frequency analysis to examine how power varied based on the context or class of the word. We observed larger power decreases in the alpha (8–12 Hz) band between 200–700 ms for the OC compared to CC words, but this was true only for the scrambled sentence context. We did not observe differences in theta power between these conditions. Context exerted an effect on the alpha and low beta (13–18 Hz) bands between 0 and 700 ms. These results suggest that the previously observed word class effects on theta power changes in a younger participant sample do not seem to be a robust effect in this older population. Though this is an indirect comparison between studies, it may suggest the existence of aging effects on word retrieval dynamics for different populations. Additionally, the interaction between word class and context suggests that word retrieval mechanisms interact with sentence-level comprehension mechanisms in the alpha-band.
  • Menenti, L., Petersson, K. M., & Hagoort, P. (2012). From reference to sense: How the brain encodes meaning for speaking. Frontiers in Psychology, 2, 384. doi:10.3389/fpsyg.2011.00384.

    Abstract

    In speaking, semantic encoding is the conversion of a non-verbal mental representation (the reference) into a semantic structure suitable for expression (the sense). In this fMRI study on sentence production we investigate how the speaking brain accomplishes this transition from non-verbal to verbal representations. In an overt picture description task, we manipulated repetition of sense (the semantic structure of the sentence) and reference (the described situation) separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these two components of semantic encoding. We also performed a control experiment with the same stimuli and design but without any linguistic task to identify areas involved in perception of the stimuli per se. The bilateral inferior parietal lobes were selectively sensitive to repetition of reference, while left inferior frontal gyrus showed selective suppression to repetition of sense. Strikingly, a widespread network of areas associated with language processing (left middle frontal gyrus, bilateral superior parietal lobes and bilateral posterior temporal gyri) all showed repetition suppression to both sense and reference processing. These areas are probably involved in mapping reference onto sense, the crucial step in semantic encoding. These results enable us to track the transition from non-verbal to verbal representations in our brains.
  • Menenti, L., Segaert, K., & Hagoort, P. (2012). The neuronal infrastructure of speaking. Brain and Language, 122, 71-80. doi:10.1016/j.bandl.2012.04.012.

    Abstract

    Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain’s integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we manipulated repetition of sentence meaning, words, and syntax separately. By investigating brain areas showing response adaptation to repetition of each of these sentence properties, we disentangle the neuronal infrastructure for these processes. We demonstrate that semantic, lexical and syntactic processes are carried out in partly overlapping and partly distinct brain networks and show that the classic left-hemispheric dominance for language is present for syntax but not semantics.
  • Menenti, L., Pickering, M. J., & Garrod, S. C. (2012). Towards a neural basis of interactive alignment in conversation. Frontiers in Human Neuroscience, 6, 185. doi:10.3389/fnhum.2012.00185.

    Abstract

    The interactive-alignment account of dialogue proposes that interlocutors achieve conversational success by aligning their understanding of the situation under discussion. Such alignment occurs because they prime each other at different levels of representation (e.g., phonology, syntax, semantics), and this is possible because these representations are shared across production and comprehension. In this paper, we briefly review the behavioral evidence, and then consider how findings from cognitive neuroscience might lend support to this account, on the assumption that alignment of neural activity corresponds to alignment of mental states. We first review work supporting representational parity between production and comprehension, and suggest that neural activity associated with phonological, lexical, and syntactic aspects of production and comprehension are closely related. We next consider evidence for the neural bases of the activation and use of situation models during production and comprehension, and how these demonstrate the activation of non-linguistic conceptual representations associated with language use. We then review evidence for alignment of neural mechanisms that are specific to the act of communication. Finally, we suggest some avenues of further research that need to be explored to test crucial predictions of the interactive alignment account.
  • Menks, W. M., Furger, R., Lenz, C., Fehlbaum, L. V., Stadler, C., & Raschle, N. M. (2017). Microstructural white matter alterations in the corpus callosum of girls with conduct disorder. Journal of the American Academy of Child & Adolescent Psychiatry, 56, 258-265. doi:10.1016/j.jaac.2016.12.006.

    Abstract

    Objective

    Diffusion tensor imaging (DTI) studies in adolescent conduct disorder (CD) have demonstrated white matter alterations of tracts connecting functionally distinct fronto-limbic regions, but only in boys or mixed-gender samples. So far, no study has investigated white matter integrity in girls with CD on a whole-brain level. Therefore, our aim was to investigate white matter alterations in adolescent girls with CD.
    Method

    We collected high-resolution DTI data from 24 girls with CD and 20 typically developing control girls using a 3T magnetic resonance imaging system. Fractional anisotropy (FA) and mean diffusivity (MD) were analyzed for whole-brain as well as a priori-defined regions of interest, while controlling for age and intelligence, using a voxel-based analysis and an age-appropriate customized template.
    Results

    Whole-brain findings revealed white matter alterations (i.e., increased FA) in girls with CD bilaterally within the body of the corpus callosum, expanding toward the right cingulum and left corona radiata. The FA and MD results in a priori-defined regions of interest were more widespread and included changes in the cingulum, corona radiata, fornix, and uncinate fasciculus. These results were not driven by age, intelligence, or attention-deficit/hyperactivity disorder comorbidity.
    Conclusion

    This report provides the first evidence of white matter alterations in female adolescents with CD as indicated through white matter reductions in callosal tracts. This finding enhances current knowledge about the neuropathological basis of female CD. An increased understanding of gender-specific neuronal characteristics in CD may influence diagnosis, early detection, and successful intervention strategies.
  • Merolla, D., & Ameka, F. K. (2012). Reflections on video fieldwork: The making of Verba Africana IV on the Ewe Hogbetsotso Festival. In D. Merolla, J. Jansen, & K. Nait-Zerrad (Eds.), Multimedia research and documentation of oral genres in Africa - The step forward (pp. 123-132). Münster: Lit.
  • Meyer, A. S., Wheeldon, L. R., Van der Meulen, F., & Konopka, A. E. (2012). Effects of speech rate and practice on the allocation of visual attention in multiple object naming. Frontiers in Psychology, 3, 39. doi:10.3389/fpsyg.2012.00039.

    Abstract

    Earlier studies had shown that speakers naming several objects typically look at each object until they have retrieved the phonological form of its name and therefore look longer at objects with long names than at objects with shorter names. We examined whether this tight eye-to-speech coordination was maintained at different speech rates and after increasing amounts of practice. Participants named the same set of objects with monosyllabic or disyllabic names on up to 20 successive trials. In Experiment 1, they spoke as fast as they could, whereas in Experiment 2 they had to maintain a fixed moderate or faster speech rate. In both experiments, the durations of the gazes to the objects decreased with increasing speech rate, indicating that at higher speech rates, the speakers spent less time planning the object names. The eye-speech lag (the time interval between the shift of gaze away from an object and the onset of its name) was independent of the speech rate but became shorter with increasing practice. Consistent word length effects on the durations of the gazes to the objects and the eye speech lags were only found in Experiment 2. The results indicate that shifts of eye gaze are often linked to the completion of phonological encoding, but that speakers can deviate from this default coordination of eye gaze and speech, for instance when the descriptive task is easy and they aim to speak fast.
  • Meyer, A. S., & Gerakaki, S. (2017). The art of conversation: Why it’s harder than you might think. Contact Magazine, 43(2), 11-15. Retrieved from http://contact.teslontario.org/the-art-of-conversation-why-its-harder-than-you-might-think/.
  • Meyer, A. S. (2017). Structural priming is not a Royal Road to representations. Commentary on Branigan and Pickering "An experimental approach to linguistic representation". Behavioral and Brain Sciences, 40: e305. doi:10.1017/S0140525X1700053X.

    Abstract

    Branigan & Pickering (B&P) propose that the structural priming paradigm is a Royal Road to linguistic representations of any kind, unobstructed by influences of psychological processes. In my view, however, they are too optimistic about the versatility of the paradigm and, more importantly, its ability to provide direct evidence about the nature of stored linguistic representations.
  • Minagawa-Kawai, Y., Cristià, A., & Dupoux, E. (2012). Erratum to “Cerebral lateralization and early speech acquisition: A developmental scenario” [Dev. Cogn. Neurosci. 1 (2011) 217–232]. Developmental Cognitive Neuroscience, 2(1), 194-195. doi:10.1016/j.dcn.2011.07.011.

    Abstract

    Refers to Yasuyo Minagawa-Kawai, Alejandrina Cristià, Emmanuel Dupoux "Cerebral lateralization and early speech acquisition: A developmental scenario" Developmental Cognitive Neuroscience, Volume 1, Issue 3, July 2011, Pages 217-232
  • Mishra, R. K., Singh, N., Pandey, A., & Huettig, F. (2012). Spoken language-mediated anticipatory eye movements are modulated by reading ability: Evidence from Indian low and high literates. Journal of Eye Movement Research, 5(1): 3, pp. 1-10. doi:10.16910/jemr.5.1.3.

    Abstract

    We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates.
  • Mitterer, H. (Ed.). (2012). Ecological aspects of speech perception [Research topic] [Special Issue]. Frontiers in Cognition.

    Abstract

    Our knowledge of speech perception is largely based on experiments conducted with carefully recorded clear speech presented under good listening conditions to undistracted listeners - a near-ideal situation, in other words. But reality poses a different set of challenges. First of all, listeners may need to divide their attention between speech comprehension and another task (e.g., driving). Outside the laboratory, the speech signal is often slurred by less than careful pronunciation and the listener has to deal with background noise. Moreover, in a globalized world, listeners need to understand speech in more than their native language. Relatedly, the speakers we listen to often have a different language background, so we have to deal with a foreign or regional accent we are not familiar with. Finally, outside the laboratory, speech perception is not an end in itself, but rather a means of contributing to a conversation. Listeners not only need to understand the speech they are hearing; they also need to use this information to plan and time their own responses. For this special topic, we invite papers that address any of these ecological aspects of speech perception.
  • Mitterer, H., & Tuinman, A. (2012). The role of native-language knowledge in the perception of casual speech in a second language. Frontiers in Psychology, 3, 249. doi:10.3389/fpsyg.2012.00249.

    Abstract

    Casual speech processes, such as /t/-reduction, make word recognition harder. Additionally, word recognition is harder in a second language (L2). Combining these challenges, we investigated whether L2 learners have recourse to knowledge from their native language (L1) when dealing with casual-speech processes in their L2. In three experiments, production and perception of /t/-reduction was investigated. An initial production experiment showed that /t/-reduction occurred in both languages and patterned similarly in proper nouns but differed when /t/ was a verbal inflection. Two perception experiments compared the performance of German learners of Dutch with that of native speakers for nouns and verbs. Mirroring the production patterns, German learners' performance strongly resembled that of native Dutch listeners when the reduced /t/ was part of a word stem, but deviated where /t/ was a verbal inflection. These results suggest that a casual speech process in a second language is problematic for learners when the process is not known from the learner's native language, similar to what has been observed for phoneme contrasts.
  • Moers, C., Meyer, A. S., & Janse, E. (2017). Effects of word frequency and transitional probability on word reading durations of younger and older speakers. Language and Speech, 60(2), 289-317. doi:10.1177/0023830916649215.

    Abstract

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three different age groups – younger children (8–12 years), adolescents (12–18 years) and older (62–95 years) Dutch speakers – show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
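
    As a concrete illustration of the TP measure described above, the short Python sketch below estimates forward and backward transitional probabilities from unigram and bigram counts. The toy word string is invented; it is not the corpus or the pipeline used in the study.

    ```python
    from collections import Counter

    # Invented toy "corpus" of word tokens
    corpus = "de hond en de kat en de oude hond".split()

    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    def forward_tp(w1, w2):
        """P(w2 | w1): how expected w2 is given its left neighbour w1."""
        return bigrams[(w1, w2)] / unigrams[w1]

    def backward_tp(w1, w2):
        """P(w1 | w2): how expected w1 is given its right neighbour w2."""
        return bigrams[(w1, w2)] / unigrams[w2]

    print(forward_tp("de", "hond"))    # 1/3: one of the three "de" tokens precedes "hond"
    print(backward_tp("de", "hond"))   # 1/2: one of the two "hond" tokens follows "de"
    ```
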
  • Moisik, S. R., & Dediu, D. (2017). Anatomical biasing and clicks: Evidence from biomechanical modeling. Journal of Language Evolution, 2(1), 37-51. doi:10.1093/jole/lzx004.

    Abstract

    It has been observed by several researchers that the Khoisan palate tends to lack a prominent alveolar ridge. A biomechanical model of click production was created to examine if these sounds might be subject to an anatomical bias associated with alveolar ridge size. Results suggest the bias is plausible, taking the form of decreased articulatory effort and improved volume change characteristics; however, further modeling and experimental research is required to solidify the claim.

    Additional information

    lzx004_Supp.zip
  • Moisik, S. R., & Gick, B. (2017). The quantal larynx: The stable regions of laryngeal biomechanics and implications for speech production. Journal of Speech, Language, and Hearing Research, 60, 540-560. doi:10.1044/2016_JSLHR-S-16-0019.

    Abstract

    Purpose: Recent proposals suggest that (a) the high dimensionality of speech motor control may be reduced via modular neuromuscular organization that takes advantage of intrinsic biomechanical regions of stability and (b) computational modeling provides a means to study whether and how such modularization works. In this study, the focus is on the larynx, a structure that is fundamental to speech production because of its role in phonation and numerous articulatory functions. Method: A 3-dimensional model of the larynx was created using the ArtiSynth platform (http://www.artisynth.org). This model was used to simulate laryngeal articulatory states, including inspiration, glottal fricative, modal prephonation, plain glottal stop, vocal–ventricular stop, and aryepiglotto–epiglottal stop and fricative. Results: Speech-relevant laryngeal biomechanics is rich with “quantal” or highly stable regions within muscle activation space. Conclusions: Quantal laryngeal biomechanics complement a modular view of speech control and have implications for the articulatory–biomechanical grounding of numerous phonetic and phonological phenomena.
  • Monaghan, P. (2017). Canalization of language structure from environmental constraints: A computational model of word learning from multiple cues. Topics in Cognitive Science, 9(1), 21-34. doi:10.1111/tops.12239.

    Abstract

    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning from cross‐situational, multimodal information was constructed and tested. Key to the model's robustness was the presence of multiple, individually unreliable information sources to support learning. This “degeneracy” in the language system has a detrimental effect on learning, compared to a noise‐free environment, but has a critically important effect on acquisition of a canalized system that is resistant to environmental noise in communication.
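
    The cross-situational mechanism at the heart of the model can be illustrated with a deliberately minimal sketch. The Python snippet below is an illustrative toy that is far simpler than the model described above (it ignores the multiple cue types and the noise manipulations); the words, referents, and scenes are invented. It simply accumulates word-referent co-occurrence evidence across individually ambiguous scenes until the correct mappings dominate.

    ```python
    from collections import defaultdict

    # Each learning situation pairs the words heard with the referents in view.
    situations = [
        ({"ball", "dog"}, {"BALL", "DOG"}),
        ({"ball", "cup"}, {"BALL", "CUP"}),
        ({"dog", "cup"},  {"DOG", "CUP"}),
    ]

    assoc = defaultdict(float)
    for words, referents in situations:
        for w in words:
            for r in referents:
                assoc[(w, r)] += 1.0   # every co-present pair gains a little evidence

    def referent_probabilities(word):
        """Normalise a word's accumulated evidence over its candidate referents."""
        scores = {r: s for (w, r), s in assoc.items() if w == word}
        total = sum(scores.values())
        return {r: s / total for r, s in scores.items()}

    print(referent_probabilities("ball"))   # BALL ends up with the largest share
    ```
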
  • Monaghan, P., & Rowland, C. F. (2017). Combining language corpora with experimental and computational approaches for language acquisition research. Language Learning, 67(S1), 14-39. doi:10.1111/lang.12221.

    Abstract

    Historically, first language acquisition research was a painstaking process of observation, requiring the laborious hand coding of children's linguistic productions, followed by the generation of abstract theoretical proposals for how the developmental process unfolds. Recently, the ability to collect large-scale corpora of children's language exposure has revolutionized the field. New techniques enable more precise measurements of children's actual language input, and these corpora constrain computational and cognitive theories of language development, which can then generate predictions about learning behavior. We describe several instances where corpus, computational, and experimental work have been productively combined to uncover the first language acquisition process and the richness of multimodal properties of the environment, highlighting how these methods can be extended to address related issues in second language research. Finally, we outline some of the difficulties that can be encountered when applying multimethod approaches and show how these difficulties can be obviated.
  • Monaghan, P., Chang, Y.-N., Welbourne, S., & Brysbaert, M. (2017). Exploring the relations between word frequency, language exposure, and bilingualism in a computational model of reading. Journal of Memory and Language, 93, 1-27. doi:10.1016/j.jml.2016.08.003.

    Abstract

    Individuals show differences in the extent to which psycholinguistic variables predict their responses for lexical processing tasks. A key variable accounting for much variance in lexical processing is frequency, but the size of the frequency effect has been demonstrated to reduce as a consequence of the individual’s vocabulary size. Using a connectionist computational implementation of the triangle model on a large set of English words, where orthographic, phonological, and semantic representations interact during processing, we show that the model demonstrates a reduced frequency effect as a consequence of amount of exposure to the language, a variable that was also a cause of greater vocabulary size in the model. The model was also trained to learn a second language, Dutch, and replicated behavioural observations that increased proficiency in a second language resulted in reduced frequency effects for that language but increased frequency effects in the first language. The model provides a first step to demonstrating causal relations between psycholinguistic variables in a model of individual differences in lexical processing, and the effect of bilingualism on interacting variables within the language processing system.
  • Mongelli, V., Dehaene, S., Vinckier, F., Peretz, I., Bartolomeo, P., & Cohen, L. (2017). Music and words in the visual cortex: The impact of musical expertise. Cortex, 86, 260-274. doi:10.1016/j.cortex.2016.05.016.

    Abstract

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space.
  • Montero-Melis, G., & Bylund, E. (2017). Getting the ball rolling: the cross-linguistic conceptualization of caused motion. Language and Cognition, 9(3), 446–472. doi:10.1017/langcog.2016.22.

    Abstract

    Does the way we talk about events correspond to how we conceptualize them? Three experiments (N = 135) examined how Spanish and Swedish native speakers judge event similarity in the domain of caused motion (‘He rolled the tyre into the barn’). Spanish and Swedish motion descriptions regularly encode path (‘into’), but differ in how systematically they include manner information (‘roll’). We designed a similarity arrangement task which allowed participants to give varying weights to different dimensions when gauging event similarity. The three experiments progressively reduced the likelihood that speakers were using language to solve the task. We found that, as long as the use of language was possible (Experiments 1 and 2), Swedish speakers were more likely than Spanish speakers to base their similarity arrangements on object manner (rolling/sliding). However, when recruitment of language was hindered through verbal interference, cross-linguistic differences disappeared (Experiment 3). A compound analysis of all experiments further showed that (i) cross-linguistic differences were played out against a backdrop of commonly represented event components, and (ii) describing vs. not describing the events did not augment cross-linguistic differences, but instead had similar effects across languages. We interpret these findings as suggesting a dynamic role of language in event conceptualization.
  • Montero-Melis, G., Eisenbeiss, S., Narasimhan, B., Ibarretxe-Antuñano, I., Kita, S., Kopecka, A., Lüpke, F., Nikitina, T., Tragel, I., Jaeger, T. F., & Bohnemeyer, J. (2017). Satellite- vs. verb-framing underpredicts nonverbal motion categorization: Insights from a large language sample and simulations. Cognitive Semantics, 3(1), 36-61. doi:10.1163/23526416-00301002.

    Abstract

    Is motion cognition influenced by the large-scale typological patterns proposed in Talmy’s (2000) two-way distinction between verb-framed (V) and satellite-framed (S) languages? Previous studies investigating this question have been limited to comparing two or three languages at a time and have come to conflicting results. We present the largest cross-linguistic study on this question to date, drawing on data from nineteen genealogically diverse languages, all investigated in the same behavioral paradigm and using the same stimuli. After controlling for the different dependencies in the data by means of multilevel regression models, we find no evidence that S- vs. V-framing affects nonverbal categorization of motion events. At the same time, statistical simulations suggest that our study and previous work within the same behavioral paradigm suffer from insufficient statistical power. We discuss these findings in the light of the great variability between participants, which suggests flexibility in motion representation. Furthermore, we discuss the importance of accounting for language variability, something which can only be achieved with large cross-linguistic samples.
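
    The power issue raised above can be made concrete with a small Monte Carlo sketch. The Python code below is a hedged illustration and not the authors' simulation: the numbers of languages and participants, the size of the framing effect, and the variance components are invented, and the nested design is analysed with a simple t-test on language means rather than with the multilevel regression models used in the paper.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def simulate_once(n_lang_per_type=10, n_subj=20,
                      type_effect=0.05, sd_lang=0.05, sd_subj=0.20):
        """Simulate one study: participants nested in languages nested in framing type."""
        lang_means = {"S": [], "V": []}
        for framing, shift in (("S", type_effect / 2), ("V", -type_effect / 2)):
            for _ in range(n_lang_per_type):
                lang_intercept = rng.normal(shift, sd_lang)
                subj_scores = rng.normal(lang_intercept, sd_subj, size=n_subj)
                lang_means[framing].append(subj_scores.mean())
        # Test the framing effect on language means, which respects the nesting
        return stats.ttest_ind(lang_means["S"], lang_means["V"]).pvalue

    n_sims = 2000
    power = np.mean([simulate_once() < 0.05 for _ in range(n_sims)])
    print(f"estimated power for this hypothetical design: {power:.2f}")
    ```
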
  • Moseley, R., Carota, F., Hauk, O., Mohr, B., & Pulvermüller, F. (2012). A role for the motor system in binding abstract emotional meaning. Cerebral Cortex, 22(7), 1634-1647. doi:10.1093/cercor/bhr238.

    Abstract

    Sensorimotor areas activate to action- and object-related words, but their role in abstract meaning processing is still debated. Abstract emotion words denoting body internal states are a critical test case because they lack referential links to objects. If actions expressing emotion are crucial for learning correspondences between word forms and emotions, emotion word–evoked activity should emerge in motor brain systems controlling the face and arms, which typically express emotions. To test this hypothesis, we recruited 18 native speakers and used event-related functional magnetic resonance imaging to compare brain activation evoked by abstract emotion words to that by face- and arm-related action words. In addition to limbic regions, emotion words indeed sparked precentral cortex, including body-part–specific areas activated somatotopically by face words or arm words. Control items, including hash mark strings and animal words, failed to activate precentral areas. We conclude that, similar to their role in action word processing, activation of frontocentral motor systems in the dorsal stream reflects the semantic binding of sign and meaning of abstract words denoting emotions and possibly other body internal states.
  • Murakami, S., Verdonschot, R. G., Kreiborg, S., Kakimoto, N., & Kawaguchi, A. (2017). Stereoscopy in dental education: An investigation. Journal of Dental Education, 81(4), 450-457. doi:10.21815/JDE.016.002.

    Abstract

    The aim of this study was to investigate whether stereoscopy can play a meaningful role in dental education. The study used an anaglyph technique in which two images were presented separately to the left and right eyes (using red/cyan filters), which, combined in the brain, give enhanced depth perception. A positional judgment task was performed to assess whether the use of stereoscopy would enhance depth perception among dental students at Osaka University in Japan. Subsequently, the optimum angle was evaluated to obtain maximum ability to discriminate among complex anatomical structures. Finally, students completed a questionnaire on a range of matters concerning their experience with stereoscopic images including their views on using stereoscopy in their future careers. The results showed that the students who used stereoscopy were better able than students who did not to appreciate spatial relationships between structures when judging relative positions. The maximum ability to discriminate among complex anatomical structures was between 2 and 6 degrees. The students' overall experience with the technique was positive, and although most did not have a clear vision for stereoscopy in their own practice, they did recognize its merits for education. These results suggest that using stereoscopic images in dental education can be quite valuable as stereoscopy greatly helped these students' understanding of the spatial relationships in complex anatomical structures.
  • Narasimhan, B., Kopecka, A., Bowerman, M., Gullberg, M., & Majid, A. (2012). Putting and taking events: A crosslinguistic perspective. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 1-18). Amsterdam: Benjamins.
  • Narasimhan, B. (2012). Putting and Taking in Tamil and Hindi. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 201-230). Amsterdam: Benjamins.

    Abstract

    Many languages have general or “light” verbs used by speakers to describe a wide range of situations owing to their relatively schematic meanings, e.g., the English verb do that can be used to describe many different kinds of actions, or the verb put that labels a range of types of placement of objects at locations. Such semantically bleached verbs often become grammaticalized and used to encode an extended (set of) meaning(s), e.g., Tamil veyyii ‘put/place’ is used to encode causative meaning in periphrastic causatives (e.g., okkara veyyii ‘make sit’, nikka veyyii ‘make stand’). But do general verbs in different languages have the same kinds of (schematic) meanings and extensional ranges? Or do they reveal different, perhaps even cross-cutting, ways of structuring the same semantic domain in different languages? These questions require detailed crosslinguistic investigation using comparable methods of eliciting data. The present study is a first step in this direction, and focuses on the use of general verbs to describe events of placement and removal in two South Asian languages, Hindi and Tamil.
  • Negwer, M., & Schubert, D. (2017). Talking convergence: Growing evidence links FOXP2 and retinoic acid in shaping speech-related motor circuitry. Frontiers in Neuroscience, 11: 19. doi:10.3389/fnins.2017.00019.

    Abstract

    A commentary on
    FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways

    by Devanna, P., Middelbeek, J., and Vernes, S. C. (2014). Front. Cell. Neurosci. 8:305. doi: 10.3389/fncel.2014.00305
  • Niccolai, V., Klepp, A., Indefrey, P., Schnitzler, A., & Biermann-Ruben, K. (2017). Semantic discrimination impacts tDCS modulation of verb processing. Scientific Reports, 7: 17162. doi:10.1038/s41598-017-17326-w.

    Abstract

    Motor cortex activation observed during body-related verb processing hints at simulation accompanying linguistic understanding. By exploiting the up- and down-regulation that anodal and cathodal transcranial direct current stimulation (tDCS) exert on motor cortical excitability, we aimed at further characterizing the functional contribution of the motor system to linguistic processing. In a double-blind sham-controlled within-subjects design, online stimulation was applied to the left hemispheric hand-related motor cortex of 20 healthy subjects. A dual, double-dissociation task required participants to semantically discriminate concrete (hand/foot) from abstract verb primes as well as to respond with the hand or with the foot to verb-unrelated geometric targets. Analyses were conducted with linear mixed models. Semantic priming was confirmed by faster and more accurate reactions when the response effector was congruent with the verb’s body part. Cathodal stimulation induced faster responses for hand verb primes thus indicating a somatotopical distribution of cortical activation as induced by body-related verbs. Importantly, this effect depended on performance in semantic discrimination. The current results point to verb processing being selectively modifiable by neuromodulation and at the same time to a dependence of tDCS effects on enhanced simulation. We discuss putative mechanisms operating in this reciprocal dependence of neuromodulation and motor resonance.

    Additional information

    41598_2017_17326_MOESM1_ESM.pdf
  • Nieuwland, M. S., Martin, A. E., & Carreiras, M. (2012). Brain regions that process case: Evidence from Basque. Human Brain Mapping, 33(11), 2509-2520. doi:10.1002/hbm.21377.

    Abstract

    The aim of this event-related fMRI study was to investigate the cortical networks involved in case processing, an operation that is crucial to language comprehension yet whose neural underpinnings are not well-understood. What is the relationship of these networks to those that serve other aspects of syntactic and semantic processing? Participants read Basque sentences that contained case violations, number agreement violations or semantic anomalies, or that were both syntactically and semantically correct. Case violations elicited activity increases, compared to correct control sentences, in a set of parietal regions including the posterior cingulate, the precuneus, and the left and right inferior parietal lobules. Number agreement violations also elicited activity increases in left and right inferior parietal regions, and additional activations in the left and right middle frontal gyrus. Regions-of-interest analyses showed that almost all of the clusters that were responsive to case or number agreement violations did not differentiate between these two. In contrast, the left and right anterior inferior frontal gyrus and the dorsomedial prefrontal cortex were only sensitive to semantic violations. Our results suggest that whereas syntactic and semantic anomalies clearly recruit distinct neural circuits, case and number violations recruit largely overlapping neural circuits, and that the distinction between the two rests on the relative contributions of parietal and prefrontal regions, respectively. Furthermore, our results are consistent with recently reported contributions of bilateral parietal and dorsolateral brain regions to syntactic processing, pointing towards potential extensions of current neurocognitive theories of language.
  • Nieuwland, M. S. (2012). Establishing propositional truth-value in counterfactual and real-world contexts during sentence comprehension: Differential sensitivity of the left and right inferior frontal gyri. NeuroImage, 59(4), 3433-3440. doi:10.1016/j.neuroimage.2011.11.018.

    Abstract

    What makes a proposition true or false has traditionally played an essential role in philosophical and linguistic theories of meaning. A comprehensive neurobiological theory of language must ultimately be able to explain the combined contributions of real-world truth-value and discourse context to sentence meaning. This fMRI study investigated the neural circuits that are sensitive to the propositional truth-value of sentences about counterfactual worlds, aiming to reveal differential hemispheric sensitivity of the inferior prefrontal gyri to counterfactual truth-value and real-world truth-value. Participants read true or false counterfactual conditional sentences (“If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would be Russia/America”) and real-world sentences (“Because N.A.S.A. developed its Apollo Project, the first country to land on the moon has been America/Russia”) that were matched on contextual constraint and truth-value. ROI analyses showed that whereas the left BA 47 showed similar activity increases to counterfactual false sentences and to real-world false sentences (compared to true sentences), the right BA 47 showed a larger increase for counterfactual false sentences. Moreover, whole-brain analyses revealed a distributed neural circuit for dealing with propositional truth-value. These results constitute the first evidence for hemispheric differences in processing counterfactual truth-value and real-world truth-value, and point toward additional right hemisphere involvement in counterfactual comprehension.
  • Nieuwland, M. S., & Martin, A. E. (2012). If the real world were irrelevant, so to speak: The role of propositional truth-value in counterfactual sentence comprehension. Cognition, 122(1), 102-109. doi:10.1016/j.cognition.2011.09.001.

    Abstract

    Propositional truth-value can be a defining feature of a sentence’s relevance to the unfolding discourse, and establishing propositional truth-value in context can be key to successful interpretation. In the current study, we investigate its role in the comprehension of counterfactual conditionals, which describe imaginary consequences of hypothetical events, and are thought to require keeping in mind both what is true and what is false. Pre-stored real-world knowledge may therefore intrude upon and delay counterfactual comprehension, which is predicted by some accounts of discourse comprehension, and has been observed during online comprehension. The impact of propositional truth-value may thus be delayed in counterfactual conditionals, as also claimed for sentences containing other types of logical operators (e.g., negation, scalar quantifiers). In an event-related potential (ERP) experiment, we investigated the impact of propositional truth-value when described consequences are both true and predictable given the counterfactual premise. False words elicited larger N400 ERPs than true words, in negated counterfactual sentences (e.g., “If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would have been Russia/America”) and real-world sentences (e.g., “Because N.A.S.A. developed its Apollo Project, the first country to land on the moon was America/Russia”) alike. These indistinguishable N400 effects of propositional truth-value, elicited by opposite word pairs, argue against disruptions by real-world knowledge during counterfactual comprehension, and suggest that incoming words are mapped onto the counterfactual context without any delay. Thus, provided a sufficiently constraining context, propositional truth-value rapidly impacts ongoing semantic processing, be the proposition factual or counterfactual.
  • Nieuwland, M. S., & Martin, A. E. (2017). Neural oscillations and a nascent corticohippocampal theory of reference. Journal of Cognitive Neuroscience, 29(5), 896-910. doi:10.1162/jocn_a_01091.

    Abstract

    The ability to use words to refer to the world is vital to the communicative power of human language. In particular, the anaphoric use of words to refer to previously mentioned concepts (antecedents) allows dialogue to be coherent and meaningful. Psycholinguistic theory posits that anaphor comprehension involves reactivating a memory representation of the antecedent. Whereas this implies the involvement of recognition memory, or the mnemonic sub-routines by which people distinguish old from new, the neural processes for reference resolution are largely unknown. Here, we report time-frequency analysis of four EEG experiments to reveal the increased coupling of functional neural systems associated with referentially coherent expressions compared to referentially problematic expressions. Despite varying in modality, language, and type of referential expression, all experiments showed larger gamma-band power for referentially coherent expressions compared to referentially problematic expressions. Beamformer analysis in high-density Experiment 4 localised the gamma-band increase to posterior parietal cortex around 400-600 ms after anaphor-onset and to frontal-temporal cortex around 500-1000 ms. We argue that the observed gamma-band power increases reflect successful referential binding and resolution, which links incoming information to antecedents through an interaction between the brain’s recognition memory networks and frontal-temporal language network. We integrate these findings with previous results from patient and neuroimaging studies, and we outline a nascent cortico-hippocampal theory of reference.
  • Nivard, M. G., Gage, S. H., Hottenga, J. J., van Beijsterveldt, C. E. M., Abdellaoui, A., Bartels, M., Baselmans, B. M. L., Ligthart, L., St Pourcain, B., Boomsma, D. I., Munafò, M. R., & Middeldorp, C. M. (2017). Genetic overlap between schizophrenia and developmental psychopathology: Longitudinal and multivariate polygenic risk prediction of common psychiatric traits during development. Schizophrenia Bulletin, 43(6), 1197-1207. doi:10.1093/schbul/sbx031.

    Abstract

    Background: Several nonpsychotic psychiatric disorders in childhood and adolescence can precede the onset of schizophrenia, but the etiology of this relationship remains unclear. We investigated to what extent the association between schizophrenia and psychiatric disorders in childhood is explained by correlated genetic risk factors. Methods: Polygenic risk scores (PRS), reflecting an individual’s genetic risk for schizophrenia, were constructed for 2588 children from the Netherlands Twin Register (NTR) and 6127 from the Avon Longitudinal Study of Parents And Children (ALSPAC). The associations between schizophrenia PRS and measures of anxiety, depression, attention deficit hyperactivity disorder (ADHD), and oppositional defiant disorder/conduct disorder (ODD/CD) were estimated at age 7, 10, 12/13, and 15 years in the 2 cohorts. Results were then meta-analyzed, and a meta-regression analysis was performed to test differences in effect sizes over age and disorders. Results: Schizophrenia PRS were associated with childhood and adolescent psychopathology. Meta-regression analysis showed differences in the associations over disorders, with the strongest association with childhood and adolescent depression and a weaker association for ODD/CD at age 7. The associations increased with age and this increase was steepest for ADHD and ODD/CD. Genetic correlations varied between 0.10 and 0.25. Conclusion: By optimally using longitudinal data across diagnoses in a multivariate meta-analysis this study sheds light on the development of childhood disorders into severe adult psychiatric disorders. The results are consistent with a common genetic etiology of schizophrenia and developmental psychopathology as well as with a stronger shared genetic etiology between schizophrenia and adolescent onset psychopathology.
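
    As background to the polygenic risk scores used above, the sketch below shows their basic construction: a weighted sum of effect-allele dosages, with per-SNP weights taken from an independent discovery GWAS. This is an illustrative Python toy with invented weights and genotypes, not the study's pipeline; real PRS construction additionally involves steps such as variant selection and covariate-adjusted regression of the phenotype on the standardised score.

    ```python
    import numpy as np

    # Per-SNP effect sizes (e.g., log odds ratios) from a discovery GWAS (invented)
    weights = np.array([0.04, -0.02, 0.07, 0.01])

    # Effect-allele dosages (0, 1 or 2) for three children x four SNPs (invented)
    dosages = np.array([
        [0, 1, 2, 1],
        [1, 1, 0, 2],
        [2, 0, 1, 0],
    ])

    prs = dosages @ weights                    # one raw score per child
    prs_z = (prs - prs.mean()) / prs.std()     # standardise before use as a predictor
    print(prs_z)
    ```
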
  • Nivard, M. G., Lubke, G. H., Dolan, C. V., Evans, D. M., St Pourcain, B., Munafo, M. R., & Middeldorp, C. M. (2017). Joint developmental trajectories of internalizing and externalizing disorders between childhood and adolescence. Development and Psychopathology, 29(3), 919-928. doi:10.1017/S0954579416000572.

    Abstract

    This study sought to identify trajectories of DSM-IV based internalizing (INT) and externalizing (EXT) problem scores across childhood and adolescence and to provide insight into the comorbidity by modeling the co-occurrence of INT and EXT trajectories. INT and EXT were measured repeatedly between age 7 and age 15 years in over 7,000 children and analyzed using growth mixture models. Five trajectories were identified for both INT and EXT, including very low, low, decreasing, and increasing trajectories. In addition, an adolescent onset trajectory was identified for INT and a stable high trajectory was identified for EXT. Multinomial regression showed that similar EXT and INT trajectories were associated. However, the adolescent onset INT trajectory was independent of high EXT trajectories, and persisting EXT was mainly associated with decreasing INT. Sex and early life environmental risk factors predicted EXT and, to a lesser extent, INT trajectories. The association between trajectories indicates the need to consider comorbidity when a child presents with INT or EXT disorders, particularly when symptoms start early. This is less necessary when INT symptoms start at adolescence. Future studies should investigate the etiology of co-occurring INT and EXT and the specific treatment needs of these severely affected children.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Allophonic mode of speech perception in Dutch children at risk for dyslexia: A longitudinal study. Research in Developmental Disabilities, 33, 1469-1483. doi:10.1016/j.ridd.2012.03.021.

    Abstract

    There is ample evidence that individuals with dyslexia have a phonological deficit. A growing body of research also suggests that individuals with dyslexia have problems with categorical perception, as evidenced by weaker discrimination of between-category differences and better discrimination of within-category differences compared to average readers. Whether the categorical perception problems of individuals with dyslexia are a result of their reading problems or a cause has yet to be determined. Whether the observed perception deficit relates to a more general auditory deficit or is specific to speech also has yet to be determined. To shed more light on these issues, the categorical perception abilities of children at risk for dyslexia and chronological age controls were investigated before and after the onset of formal reading instruction in a longitudinal study. Both identification and discrimination data were collected using identical paradigms for speech and non-speech stimuli. Results showed the children at risk for dyslexia to shift from an allophonic mode of perception in kindergarten to a phonemic mode of perception in first grade, while the control group showed a phonemic mode already in kindergarten. The children at risk for dyslexia thus showed an allophonic perception deficit in kindergarten, which was later suppressed by phonemic perception as a result of formal reading instruction in first grade; allophonic perception in kindergarten can thus be treated as a clinical marker for the possibility of later reading problems.
  • Noordenbos, M., Segers, E., Serniclaes, W., Mitterer, H., & Verhoeven, L. (2012). Neural evidence of allophonic perception in children at risk for dyslexia. Neuropsychologia, 50, 2010-2017. doi:10.1016/j.neuropsychologia.2012.04.026.

    Abstract

    Learning to read is a complex process that develops normally in the majority of children and requires the mapping of graphemes to their corresponding phonemes. Problems with the mapping process nevertheless occur in about 5% of the population and are typically attributed to poor phonological representations, which are — in turn — attributed to underlying speech processing difficulties. We examined auditory discrimination of speech sounds in 6-year-old beginning readers with a familial risk of dyslexia (n=31) and no such risk (n=30) using the mismatch negativity (MMN). MMNs were recorded for stimuli belonging to either the same phoneme category (acoustic variants of/bə/) or different phoneme categories (/bə/vs./də/). Stimuli from different phoneme categories elicited MMNs in both the control and at-risk children, but the MMN amplitude was clearly lower in the at-risk children. In contrast, the stimuli from the same phoneme category elicited an MMN in only the children at risk for dyslexia. These results show children at risk for dyslexia to be sensitive to acoustic properties that are irrelevant in their language. Our findings thus suggest a possible cause of dyslexia in that they show 6-year-old beginning readers with at least one parent diagnosed with dyslexia to have a neural sensitivity to speech contrasts that are irrelevant in the ambient language. This sensitivity clearly hampers the development of stable phonological representations and thus leads to significant reading impairment later in life.
  • Nora, A., Hultén, A., Karvonen, L., Kim, J.-Y., Lehtonen, M., Yli-Kaitala, H., Service, E., & Salmelin, R. (2012). Long-term phonological learning begins at the level of word form. NeuroImage, 63, 789-799. doi:10.1016/j.neuroimage.2012.07.026.

    Abstract

    Incidental learning of phonological structures through repeated exposure is an important component of native and foreign-language vocabulary acquisition that is not well understood at the neurophysiological level. It is also not settled when this type of learning occurs at the level of word forms as opposed to phoneme sequences. Here, participants listened to and repeated back foreign phonological forms (Korean words) and new native-language word forms (Finnish pseudowords) on two days. Recognition performance was improved, repetition latency became shorter and repetition accuracy increased when phonological forms were encountered multiple times. Cortical magnetoencephalography responses occurred bilaterally but the experimental effects only in the left hemisphere. Superior temporal activity at 300–600 ms, probably reflecting acoustic-phonetic processing, lasted longer for foreign phonology than for native phonology. Formation of longer-term auditory-motor representations was evidenced by a decrease of a spatiotemporally separate left temporal response and correlated increase of left frontal activity at 600–1200 ms on both days. The results point to item-level learning of novel whole-word representations.
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model (D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy (A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Nouaouri, N. (2012). The semantics of placement and removal predicates in Moroccan Arabic. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 99-122). Amsterdam: Benjamins.

    Abstract

    This article explores the expression of placement and removal events in Moroccan Arabic, particularly the semantic features of ‘putting’ and ‘taking’ verbs, classified in accordance with their combination with Goal and/or Source NPs. Moroccan Arabic verbs encode a variety of components of placement and removal events, including containment, attachment, features of the figure, and trajectory. Furthermore, accidental events are distinguished from deliberate events either by the inherent semantics of predicates or by syntactic means. Although some predicates distinguish them, the postures of Figures are typically not specified as they are in other languages, such as Dutch. Although Ground locations are frequently mentioned in both source-oriented and goal-oriented clauses, they are used more often in goal-oriented clauses.
  • Ocklenburg, S., Schmitz, J., Moinfar, Z., Moser, D., Klose, R., Lor, S., Kunz, G., Tegenthoff, M., Faustmann, P., Francks, C., Epplen, J. T., Kumsta, R., & Güntürkün, O. (2017). Epigenetic regulation of lateralized fetal spinal gene expression underlies hemispheric asymmetries. eLife, 6: e22784. doi:10.7554/eLife.22784.001.

    Abstract

    Lateralization is a fundamental principle of nervous system organization, but its molecular determinants are mostly unknown. In humans, asymmetric gene expression in the fetal cortex has been suggested as the molecular basis of handedness. However, human fetuses already show considerable asymmetries in arm movements before the motor cortex is functionally linked to the spinal cord, making it more likely that spinal gene expression asymmetries form the molecular basis of handedness. We analyzed genome-wide mRNA expression and DNA methylation in cervical and anterior thoracal spinal cord segments of five human fetuses and show development-dependent gene expression asymmetries. These gene expression asymmetries were epigenetically regulated by miRNA expression asymmetries in the TGF-β signaling pathway and lateralized methylation of CpG islands. Our findings suggest that molecular mechanisms for epigenetic regulation within the spinal cord constitute the starting point for handedness, implying a fundamental shift in our understanding of the ontogenesis of hemispheric asymmetries in humans.
  • O’Connor, L. (2012). Take it up, down, and away: Encoding placement and removal in Lowland Chontal. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 297-326). Amsterdam: Benjamins.

    Abstract

    This paper offers a structural and semantic analysis of expressions of caused motion in Lowland Chontal of Oaxaca, an indigenous language of southern Mexico. The data were collected using a video stimulus designed to elicit a wide range of caused motion event descriptions. The most frequent event types in the corpus depict caused motion to and from relations of support and containment, fundamental notions in the description of spatial relations between two entities and critical semantic components of the linguistic encoding of caused motion in this language. Formal features of verbal construction type and argument realization are examined by sorting event descriptions into semantic types of placement and removal, to and from support and to and from containment. Together with typological factors that shape the distribution of spatial semantics and referent expression, separate treatments of support and containment relations serve to clarify notable asymmetries in patterns of predicate type and argument realization.
  • Oliver, G., Gullberg, M., Hellwig, F., Mitterer, H., & Indefrey, P. (2012). Acquiring L2 sentence comprehension: A longitudinal study of word monitoring in noise. Bilingualism: Language and Cognition, 15, 841-857. doi:10.1017/S1366728912000089.

    Abstract

    This study investigated the development of second language online auditory processing with ab initio German learners of Dutch. We assessed the influence of different levels of background noise and different levels of semantic and syntactic target word predictability on word-monitoring latencies. There was evidence of syntactic, but not lexical-semantic, transfer from the L1 to the L2 from the onset of L2 learning. An initial stronger adverse effect of noise on syntactic compared to phonological processing disappeared after two weeks of learning Dutch, suggesting a change towards more robust syntactic processing. At the same time, the L2 learners started to exploit semantic constraints to predict upcoming target words. The use of semantic predictability remained less efficient than in native speakers until the end of the observation period. The improvement and the persistent problems in semantic processing were independent of noise and seem to reflect the need for more context information to build up online semantic representations in L2 listening.
  • O'Meara, C., & Majid, A. (2017). El léxico olfativo en la lengua seri. In A. L. M. D. Ruiz, & A. Z. Pérez (Eds.), La Dimensión Sensorial de la Cultura: Diez contribuciones al estudio de los sentidos en México. (pp. 101-118). Mexico City: Universidad Autónoma Metropolitana.
  • Ortega, G. (2017). Iconicity and sign lexical acquisition: A review. Frontiers in Psychology, 8: 1280. doi:10.3389/fpsyg.2017.01280.

    Abstract

    The study of iconicity, defined as the direct relationship between a linguistic form and its referent, has gained momentum in recent years across a wide range of disciplines. In the spoken modality, there is abundant evidence showing that iconicity is a key factor that facilitates language acquisition. However, when we look at sign languages, which excel in the prevalence of iconic structures, there is a more mixed picture, with some studies showing a positive effect and others showing a null or negative effect. In an attempt to reconcile the existing evidence, the present review presents a critical overview of the literature on the acquisition of a sign language as a first (L1) and second (L2) language and points at some factors that may be the source of disagreement. Regarding sign L1 acquisition, the contradictory findings may relate to iconicity being defined in a very broad sense when a more fine-grained operationalisation might reveal an effect on sign learning. Regarding sign L2 acquisition, evidence shows that there is a clear dissociation in the effect of iconicity in that it facilitates conceptual-semantic aspects of sign learning but hinders the acquisition of the exact phonological form of signs. It will be argued that when we consider the gradient nature of iconicity and the fact that signs consist of a phonological form attached to a meaning, we can discern how iconicity impacts sign learning in positive and negative ways.
  • Ortega, G., Sumer, B., & Ozyurek, A. (2017). Type of iconicity matters in the vocabulary development of signing children. Developmental Psychology, 53(1), 89-99. doi:10.1037/dev0000161.

    Abstract

    Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children’s preferences for certain types of sign-referent links during vocabulary development in sign language. Results from a picture description task indicate that lexical signs with 2 possible variants are used in different proportions by deaf signers from different age groups. While preschool and school-age children favored variants representing actions associated with their referent (e.g., a writing hand for the sign PEN), adults preferred variants representing the perceptual features of those objects (e.g., upward index finger representing a thin, elongated object for the sign PEN). Deaf parents interacting with their children, however, used action- and perceptual-based variants in equal proportion and favored action variants more than adults signing to other adults. We propose that when children are confronted with 2 variants for the same concept, they initially prefer action-based variants because they give them the opportunity to link a linguistic label to familiar schemas linked to their action/motor experiences. Our results echo findings showing a bias for action-based depictions in the development of iconic co-speech gestures suggesting a modality bias for such representations during development.
  • Ostarek, M., & Huettig, F. (2017). Spoken words can make the invisible visible – Testing the involvement of low-level visual representations in spoken word processing. Journal of Experimental Psychology: Human Perception and Performance, 43, 499-508. doi:10.1037/xhp0000313.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" -> picture of a bottle) vs. incongruent ("bottle" -> picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200–400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e. what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
  • Ostarek, M., & Huettig, F. (2017). A task-dependent causal role for low-level visual processes in spoken word comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1215-1224. doi:10.1037/xlm0000375.

    Abstract

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete vs. abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.

  • Ostarek, M., & Vigliocco, G. (2017). Reading sky and seeing a cloud: On the relevance of events for perceptual simulation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(4), 579-590. doi:10.1037/xlm0000318.

    Abstract

    Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in congruent locations (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained unclear. To provide a general account of the different findings, we propose that word comprehension relies on the perceptual simulation of a prototypical event involving the entity denoted by a word. In three experiments, participants had to discriminate between two target pictures appearing at the top or the bottom of the screen by pressing the left vs. right button. Immediately before the targets appeared, they saw an up/down word belonging to the target’s event, an up/down word unrelated to the target, or a spatially neutral control word. Prime words belonging to the target’s event facilitated identification of targets at 250 ms SOA (Experiment 1), but only when the targets were presented in the vertical location where they are typically seen, indicating that targets were integrated in the simulations activated by the prime words. Moreover, at the same SOA, there was a robust facilitation effect for targets appearing in their typical location regardless of the prime type. However, when words were presented for 100 ms (Experiment 2) or 800 ms (Experiment 3), only a location non-specific priming effect was found, suggesting that the visual system was not activated. Implications for theories of semantic processing are discussed.
  • Ozker, M., Schepers, I., Magnotti, J., Yoshor, D., & Beauchamp, M. (2017). A double dissociation between anterior and posterior superior temporal gyrus for processing audiovisual speech demonstrated by electrocorticography. Journal of Cognitive Neuroscience, 29(6), 1044-1060. doi:10.1162/jocn_a_01110.

    Abstract

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
  • Ozyurek, A. (2017). Function and processing of gesture in the context of language. In R. B. Church, M. W. Alibali, & S. D. Kelly (Eds.), Why gesture? How the hands function in speaking, thinking and communicating (pp. 39-58). Amsterdam: John Benjamins Publishing. doi:10.1075/gs.7.03ozy.

    Abstract

    Most research focuses on the function of gesture independently of its link to the speech it accompanies and the coexpressive functions it has together with speech. This chapter instead approaches gesture in terms of its communicative function in relation to speech, and demonstrates how it is shaped by the linguistic encoding of a speaker’s message. Drawing on crosslinguistic research on iconic/pointing gesture production with adults, children, and bilinguals, it shows that the specific language speakers use modulates the rate and the shape of the iconic gestures produced for the same events. The findings challenge claims that aim to understand gesture’s function for "thinking only" in adults and during development.
  • Ozyurek, A. (2012). Gesture. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 626-646). Berlin: Mouton.

    Abstract

    Gestures are meaningful movements of the body, the hands, and the face during communication, which accompany the production of both spoken and signed utterances. Recent research has shown that gestures are an integral part of language and that they contribute semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore, they reveal internal representations of the language user during communication in ways that might not be encoded in the verbal part of the utterance. Firstly, this chapter summarizes research on the role of gesture in spoken languages. Subsequently, it gives an overview of how gestural components might manifest themselves in sign languages, that is, in a situation in which both gesture and sign are expressed by the same articulators. Current studies are discussed that address the question of whether gestural components are the same or different in the two language modalities from a semiotic as well as from a cognitive and processing viewpoint. Understanding the role of gesture in both sign and spoken language contributes to our knowledge of the human language faculty as a multimodal communication system.
  • Paternoster, L., Zhurov, A., Toma, A., Kemp, J., St Pourcain, B., Timpson, N., McMahon, G., McArdle, W., Ring, S., Smith, G., Richmond, S., & Evans, D. (2012). Genome-wide Association Study of Three-Dimensional Facial Morphology Identifies a Variant in PAX3 Associated with Nasion Position. The American Journal of Human Genetics, 90(3), 478-485. doi:10.1016/j.ajhg.2011.12.021.

    Abstract

    Craniofacial morphology is highly heritable, but little is known about which genetic variants influence normal facial variation in the general population. We aimed to identify genetic variants associated with normal facial variation in a population-based cohort of 15-year-olds from the Avon Longitudinal Study of Parents and Children. 3D high-resolution images were obtained with two laser scanners; these were merged and aligned, and 22 landmarks were identified and their x, y, and z coordinates used to generate 54 3D distances reflecting facial features. 14 principal components (PCs) were also generated from the landmark locations. We carried out genome-wide association analyses of these distances and PCs in 2,185 adolescents and attempted to replicate any significant associations in a further 1,622 participants. In the discovery analysis no associations were observed with the PCs, but we identified four associations with the distances, and one of these, the association between rs7559271 in PAX3 and the nasion to midendocanthion distance (n-men), was replicated (p = 4 × 10⁻⁷). In a combined analysis, each G allele of rs7559271 was associated with an increase in n-men distance of 0.39 mm (p = 4 × 10⁻¹⁶), explaining 1.3% of the variance. Independent associations were observed in both the z (nasion prominence) and y (nasion height) dimensions (p = 9 × 10⁻⁹ and p = 9 × 10⁻¹⁰, respectively), suggesting that the locus primarily influences growth in the yz plane. Rare variants in PAX3 are known to cause Waardenburg syndrome, which involves deafness, pigmentary abnormalities, and facial characteristics including a broad nasal bridge. Our findings show that common variants within this gene also influence normal craniofacial development.
  • Pederson, E. (1995). Questionnaire on event realization. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 54-60). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004359.

    Abstract

    "Event realisation" refers to the normal final state of the affected entity of an activity described by a verb. For example, the sentence John killed the mosquito entails that the mosquito is afterwards dead – this is the full realisation of a killing event. By contrast, a sentence such as John hit the mosquito does not entail the mosquito’s death (even though we might assume this to be a likely result). In using a certain verb, which features of event realisation are entailed and which are just likely? This questionnaire supports cross-linguistic exploration of event realisation for a range of event types.
  • Peeters, D., Vanlangendonck, F., & Willems, R. M. (2012). Bestaat er een talenknobbel? Over taal in ons brein. In M. Boogaard, & M. Jansen (Eds.), Alles wat je altijd al had willen weten over taal: De taalcanon (pp. 41-43). Amsterdam: Meulenhoff.

    Abstract

    When someone is good at speaking several languages, it is often said in Dutch that such a person has a "talenknobbel", literally a "language bump". Everyone knows this is not meant literally: we do not recognize a gifted language learner by a large bump on the head. Yet people once genuinely believed that a literal language bump could develop. A well-developed language faculty was thought to go together with growth of the brain region responsible for it. This part of the brain could even become so large that it pressed against the skull from the inside, particularly around the eyes. We now know better. But where in the brain is language actually located?
  • Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2017). Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia, 95, 21-29. doi:10.1016/j.neuropsychologia.2016.12.004.

    Abstract

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.
  • Perlman, M. (2017). Debunking two myths against vocal origins of language: Language is iconic and multimodal to the core. Interaction studies, 18(3), 376-401. doi:10.1075/is.18.3.05per.

    Abstract

    Gesture-first theories of language origins often raise two unsubstantiated arguments against vocal origins. First, they argue that great ape vocal behavior is highly constrained, limited to a fixed, species-typical repertoire of reflexive calls. Second, they argue that vocalizations lack any significant potential to ground meaning through iconicity, or resemblance between form and meaning. This paper reviews the considerable evidence that debunks these two “myths”. Accumulating evidence shows that the great apes exercise voluntary control over their vocal behavior, including their breathing, larynx, and supralaryngeal articulators. They are also able to learn new vocal behaviors, and even show some rudimentary ability for vocal imitation. In addition, an abundance of research demonstrates that the vocal modality affords rich potential for iconicity. People can understand iconicity in sound symbolism, and they can produce iconic vocalizations to communicate a diverse range of meanings. Thus, two of the primary arguments against vocal origins theories are not tenable. As an alternative, the paper concludes that the origins of language – going as far back as our last common ancestor with great apes – are rooted in iconicity in both gesture and vocalization.

  • Perlman, M., & Salmi, R. (2017). Gorillas may use their laryngeal air sacs for whinny-type vocalizations and male display. Journal of Language Evolution, 2(2), 126-140. doi:10.1093/jole/lzx012.

    Abstract

    Great apes and siamangs—but not humans—possess laryngeal air sacs, suggesting that they were lost over hominin evolution. The absence of air sacs in humans may hold clues to speech evolution, but little is known about their functions in extant apes. We investigated whether gorillas use their air sacs to produce the staccato ‘growling’ of the silverback chest beat display. This hypothesis was formulated after viewing a nature documentary showing a display by a silverback western gorilla (Kingo). As Kingo growls, the video shows distinctive vibrations in his chest and throat under which the air sacs extend. We also investigated whether other similarly staccato vocalizations—the whinny, sex whinny, and copulation grunt—might also involve the air sacs. To examine these hypotheses, we collected an opportunistic sample of video and audio evidence from research records and another documentary of Kingo’s group, and from videos of other gorillas found on YouTube. Analysis shows that the four vocalizations are each emitted in rapid pulses of a similar frequency (8–16 pulses per second), and limited visual evidence indicates that they may all occur with upper torso vibrations. Future research should determine how consistently the vibrations co-occur with the vocalizations, whether they are synchronized, and their precise location and timing. Our findings fit with the hypothesis that apes—especially, but not exclusively males—use their air sacs for vocalizations and displays related to size exaggeration for sex and territory. Thus changes in social structure, mating, and sexual dimorphism might have led to the obsolescence of the air sacs and their loss in hominin evolution.
  • Perniss, P. M., Vinson, D., Seifart, F., & Vigliocco, G. (2012). Speaking of shape: The effects of language-specific encoding on semantic representations. Language and Cognition, 4, 223-242. doi:10.1515/langcog-2012-0012.

    Abstract

    The question of whether different linguistic patterns differentially influence semantic and conceptual representations is of central interest in cognitive science. In this paper, we investigate whether the regular encoding of shape within a nominal classification system leads to an increased salience of shape in speakers’ semantic representations by comparing English, (Amazonian) Spanish, and Bora, a shape-based classifier language spoken in the Amazonian regions of Colombia and Peru. Crucially, in displaying obligatory use, pervasiveness in grammar, high discourse frequency, and phonological variability of forms corresponding to particular shape features, the Bora classifier system differs in important ways from those in previous studies investigating effects of nominal classification, thereby allowing better control of factors that may have influenced previous findings. In addition, the inclusion of Spanish monolinguals living in the Bora village allowed control for the possibility that differences found between English and Bora speakers may be attributed to their very different living environments. We found that shape is more salient in the semantic representation of objects for speakers of Bora, which systematically encodes shape, than for speakers of English and Spanish, which do not. Our results are consistent with assumptions that semantic representations are shaped and modulated by our specific linguistic experiences.
  • Perniss, P. M. (2012). Use of sign space. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign Language: an International Handbook (pp. 412-431). Berlin: Mouton de Gruyter.

    Abstract

    This chapter focuses on the semantic and pragmatic uses of space. The questions addressed concern how sign space (i.e. the area of space in front of the signer’s body) is used for meaning construction, how locations in sign space are associated with discourse referents, and how signers choose to structure sign space for their communicative intents. The chapter gives an overview of linguistic analyses of the use of space, starting with the distinction between syntactic and topographic uses of space and the different types of signs that function to establish referent-location associations, and moving to analyses based on mental spaces and conceptual blending theories. Semantic-pragmatic conventions for organizing sign space are discussed, as well as spatial devices notable in the visual-spatial modality (particularly, classifier predicates and signing perspective), which influence and determine the way meaning is created in sign space. Finally, the special role of simultaneity in sign languages is discussed, focusing on the semantic and discourse-pragmatic functions of simultaneous constructions.
  • Petersen, J. H. (2012). How to put and take in Kalasha. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 349-366). Amsterdam: Benjamins.

    Abstract

    In Kalasha, an Indo-Aryan language spoken in Northwest Pakistan, the linguistic encoding of ‘put’ and ‘take’ events reveals a symmetry between lexical ‘put’ and ‘take’ verbs that implies ‘placement on’ and ‘removal from’ a supporting surface. As regards ‘placement in’ and ‘removal from’ an enclosure, the data reveal a lexical asymmetry as ‘take’ verbs display a larger degree of linguistic elaboration of the Figure-Ground relation and the type of caused motion than ‘put’ verbs. When considering syntactic patterns, more instances of asymmetry between these two event types show up. The analysis presented here supports the proposal that an asymmetry exists in the encoding of goals versus sources as suggested in Nam (2004) and Ikegami (1987), but it calls into question the statement put forward by Regier and Zheng (2007) that endpoints (goals) are more finely differentiated semantically than starting points (sources).
  • Petersson, K. M., & Hagoort, P. (2012). The neurobiology of syntax: Beyond string-sets [Review article]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 1971-1983. doi:10.1098/rstb.2012.0101.

    Abstract

    The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
