Publications

Displaying 301 - 400 of 628
  • Lehtonen, M., Cunillera, T., Rodríguez-Fornells, A., Hultén, A., Tuomainen, J., & Laine, M. (2007). Recognition of morphologically complex words in Finnish: Evidence from event-related potentials. Brain Research, 1148, 123-137. doi:10.1016/j.brainres.2007.02.026.

    Abstract

    The temporal dynamics of processing morphologically complex words was investigated by recording event-related brain potentials (ERPs) when native Finnish-speakers performed a visual lexical decision task. Behaviorally, there is evidence that recognition of inflected nouns elicits a processing cost (i.e., longer reaction times and higher error rates) in comparison to matched monomorphemic words. We aimed to reveal whether the processing cost stems from decomposition at the early visual word form level or from recomposition at the later semantic–syntactic level. The ERPs showed no early effects for morphology, but revealed an interaction with word frequency at a late N400-type component, as well as a late positive component that was larger for inflected words. These results suggest that the processing cost stems mainly from the semantic–syntactic level. We also studied the features of the morphological decomposition route by investigating the recognition of pseudowords carrying real morphemes. The results showed no differences between inflected vs. uninflected pseudowords with a false stem, but differences in relation to those with a real stem, suggesting that a recognizable stem is needed to initiate the decomposition route.
  • Lev-Ari, S. (2019). People with larger social networks are better at predicting what someone will say but not how they will say it. Language, Cognition and Neuroscience, 34(1), 101-114. doi:10.1080/23273798.2018.1508733.

    Abstract

    Prediction of upcoming words facilitates language processing. Individual differences in social experience, however, might influence prediction ability by influencing input variability and representativeness. This paper explores how individual differences in social network size influence prediction and how this influence differs across linguistic levels. In Experiment 1, participants predicted likely sentence completions from several plausible endings differing in meaning or only form (e.g. work vs. job). In Experiment 2, participants’ pupil size was measured as they listened to sentences whose ending was the dominant one or deviated from it in either meaning or form. Both experiments show that people with larger social networks are better at predicting upcoming meanings but not the form they would take. The results thus show that people with different social experience process language differently, and shed light on how social dynamics interact with the structure of the linguistic level to influence learning of linguistic patterns.

    Additional information

    plcp_a_1508733_sm8698.docx
  • Levelt, W. J. M. (2019). How Speech Evolved: Some Historical Remarks. Journal of Speech, Language, and Hearing Research, 62(8S), 2926-2931. doi:10.1044/2019_JSLHR-S-CSMC7-19-0017.

    Abstract

    The evolution of speech and language has been a recurring topic in the language sciences since the so-called “cognitive revolution.”
  • Levelt, W. J. M. (2019). On empirical methodology, constraints, and hierarchy in artificial grammar learning. Topics in Cognitive Science. doi:10.1111/tops.12441.

    Abstract

    This paper considers the AGL literature from a psycholinguistic perspective. It first presents a taxonomy of the experimental familiarization test procedures used, which is followed by a consideration of shortcomings and potential improvements of the empirical methodology. It then turns to reconsidering the issue of grammar learning from the point of view of acquiring constraints, instead of the traditional AGL approach in terms of acquiring sets of rewrite rules. This is, in particular, a natural way of handling long-distance dependencies. The final section addresses an underdeveloped issue in the AGL literature, namely how to detect latent hierarchical structure in AGL response patterns.
  • Levelt, W. J. M. (1988). Onder sociale wetenschappen. Mededelingen van de Afdeling Letterkunde, 51(2), 41-55.
  • Levinson, S. C. (2007). Cut and break verbs in Yélî Dnye, the Papuan language of Rossel Island. Cognitive Linguistics, 18(2), 207-218. doi:10.1515/COG.2007.009.

    Abstract

    The paper explores verbs of cutting and breaking (C&B, hereafter) in Yélî Dnye, the Papuan language of Rossel Island. The Yélî Dnye verbs covering the C&B domain do not divide it in the expected way, with verbs focusing on special instruments and manners of action on the one hand, and verbs focusing on the resultant state on the other. Instead, just three transitive verbs and their intransitive counterparts cover most of the domain, and they are all based on 'exotic' distinctions in mode of severance: coherent severance with the grain vs. against the grain, and incoherent severance (regardless of grain).
  • Levinson, S. C., & Burenhult, N. (2009). Semplates: A new concept in lexical semantics? Language, 85, 153-174. doi:10.1353/lan.0.0090.

    Abstract

    This short report draws attention to an interesting kind of configuration in the lexicon that seems to have escaped theoretical or systematic descriptive attention. These configurations, which we dub SEMPLATES, consist of an abstract structure or template, which is recurrently instantiated in a number of lexical sets, typically of different form classes. A number of examples from different language families are adduced, and generalizations made about the nature of semplates, which are contrasted to other, perhaps similar, phenomena.
  • Levshina, N. (2019). Token-based typology and word order entropy: A study based on universal dependencies. Linguistic Typology, 23(3), 533-572. doi:10.1515/lingty-2019-0025.

    Abstract

    The present paper discusses the benefits and challenges of token-based typology, which takes into account the frequencies of words and constructions in language use. This approach makes it possible to introduce new criteria for language classification, which would be difficult or impossible to achieve with the traditional, type-based approach. This point is illustrated by several quantitative studies of word order variation, which can be measured as entropy at different levels of granularity. I argue that this variation can be explained by general functional mechanisms and pressures, which manifest themselves in language use, such as optimization of processing (including avoidance of ambiguity) and grammaticalization of predictable units occurring in chunks. The case studies are based on multilingual corpora, which have been parsed using the Universal Dependencies annotation scheme.

    Additional information

    lingty-2019-0025ad.zip
  • Liang, S., Li, Y., Zhang, Z., Kong, X., Wang, Q., Deng, W., Li, X., Zhao, L., Li, M., Meng, Y., Huang, F., Ma, X., Li, X.-m., Greenshaw, A. J., Shao, J., & Li, T. (2019). Classification of first-episode schizophrenia using multimodal brain features: A combined structural and diffusion imaging study. Schizophrenia Bulletin, 45(3), 591-599. doi:10.1093/schbul/sby091.

    Abstract

    Schizophrenia is a common and complex mental disorder with neuroimaging alterations. Recent neuroanatomical pattern recognition studies attempted to distinguish individuals with schizophrenia by structural magnetic resonance imaging (sMRI) and diffusion tensor imaging (DTI) [1, 2]. Applications of cutting-edge machine learning approaches in structural neuroimaging studies have revealed potential pathways to classification of schizophrenia based on regional gray matter volume (GMV) or density or cortical thickness [3–5]. Additionally, cortical folding may have high discriminatory value in correctly identifying symptom severity in schizophrenia [6]. Regional GMV and cortical thickness have also been combined in attempts to differentiate individuals with schizophrenia from healthy controls (HCs) [7]. Applications of machine learning algorithms to diffusion imaging data analysis to predict individuals with first-episode schizophrenia (FES) have achieved encouraging accuracy [8–10]. White matter (WM) abnormalities in schizophrenia as estimated by DTI appear to be present in the early stage of the disorder, most likely reflecting the developmental stage of the sample of interest.

    Additional information

    Supplementary data
  • Liang, S., Wang, Q., Kong, X., Deng, W., Yang, X., Li, X., Zhang, Z., Zhang, J., Zhang, C., Li, X.-m., Ma, X., Shao, J., Greenshaw, A. J., & Li, T. (2019). White matter abnormalities in major depression biotypes identified by diffusion tensor imaging. Neuroscience Bulletin, 35(5), 867-876. doi:10.1007/s12264-019-00381-w.

    Abstract

    Identifying data-driven biotypes of major depressive disorder (MDD) has promise for the clarification of diagnostic heterogeneity. However, few studies have focused on white-matter abnormalities for MDD subtyping. This study included 116 patients with MDD and 118 demographically-matched healthy controls assessed by diffusion tensor imaging and neurocognitive evaluation. Hierarchical clustering was applied to the major fiber tracts, in conjunction with tract-based spatial statistics, to reveal white-matter alterations associated with MDD. Clinical and neurocognitive differences were compared between identified subgroups and healthy controls. With fractional anisotropy extracted from 20 fiber tracts, cluster analysis revealed 3 subgroups based on the patterns of abnormalities. Patients in each subgroup versus healthy controls showed a stepwise pattern of white-matter alterations as follows: subgroup 1 (25.9% of patient sample), widespread white-matter disruption; subgroup 2 (43.1% of patient sample), intermediate and more localized abnormalities in aspects of the corpus callosum and left cingulate; and subgroup 3 (31.0% of patient sample), possible mild alterations, but no statistically significant tract disruption after controlling for family-wise error. The neurocognitive impairment in each subgroup accompanied the white-matter alterations: subgroup 1, deficits in sustained attention and delayed memory; subgroup 2, dysfunction in delayed memory; and subgroup 3, no significant deficits. Three subtypes of white-matter abnormality exist in individuals with major depression, those having widespread abnormalities suffering more neurocognitive impairments, which may provide evidence for parsing the heterogeneity of the disorder and help optimize type-specific treatment approaches.

    Additional information

    12264_2019_381_MOESM1_ESM.pdf
  • Liljeström, M., Hultén, A., Parkkonen, L., & Salmelin, R. (2009). Comparing MEG and fMRI views to naming actions and objects. Human Brain Mapping, 30, 1845-1856. doi:10.1002/hbm.20785.

    Abstract

    Most neuroimaging studies are performed using one imaging method only, either functional magnetic resonance imaging (fMRI), electroencephalography (EEG), or magnetoencephalography (MEG). Information on both location and timing has been sought by recording fMRI and EEG simultaneously, or MEG and fMRI in separate sessions. Such approaches assume similar active areas whether detected via hemodynamic or electrophysiological signatures. Direct comparisons, after independent analysis of data from each imaging modality, have been conducted primarily on low-level sensory processing. Here, we report MEG (timing and location) and fMRI (location) results in 11 subjects when they named pictures that depicted an action or an object. The experimental design was exactly the same for the two imaging modalities. The MEG data were analyzed with two standard approaches: a set of equivalent current dipoles and a distributed minimum norm estimate. The fMRI blood-oxygen-level-dependent (BOLD) data were subjected to the usual random-effect contrast analysis. At the group level, MEG and fMRI data showed fairly good convergence, with both overall activation patterns and task effects localizing to comparable cortical regions. There were some systematic discrepancies, however, and the correspondence was less compelling in the individual subjects. The present analysis should be helpful in reconciling results of fMRI and MEG studies on high-level cognitive functions.
  • Linnér, R. K., Biroli, P., Kong, E., Meddens, S. F. W., Wedow, R., Fontana, M. A., Lebreton, M., Tino, S. P., Abdellaoui, A., Hammerschlag, A. R., Nivard, M. G., Okbay, A., Rietveld, C. A., Timshel, P. N., Trzaskowski, M., De Vlaming, R., Zünd, C. L., Bao, Y., Buzdugan, L., Caplin, A. H., Chen, C.-Y., Eibich, P., Fontanillas, P., Gonzalez, J. R., Joshi, P. K., Karhunen, V., Kleinman, A., Levin, R. Z., Lill, C. M., Meddens, G. A., Muntané, G., Sanchez-Roige, S., Van Rooij, F. J., Taskesen, E., Wu, Y., Zhang, F., 23andMe Research Team, eQTLgen Consortium, International Cannabis Consortium, Social Science Genetic Association Consortium, Auton, A., Boardman, J. D., Clark, D. W., Conlin, A., Dolan, C. C., Fischbacher, U., Groenen, P. J. F., Harris, K. M., Hasler, G., Hofman, A., Ikram, M. A., Jain, S., Karlsson, R., Kessler, R. C., Kooyman, M., MacKillop, J., Männikkö, M., Morcillo-Suarez, C., McQueen, M. B., Schmidt, K. M., Smart, M. C., Sutter, M., Thurik, A. R., Uitterlinden, A. G., White, J., De Wit, H., Yang, J., Bertram, L., Boomsma, D. I., Esko, T., Fehr, E., Hinds, D. A., Johannesson, M., Kumari, M., Laibson, D., Magnusson, P. K. E., Meyer, M. N., Navarro, A., Palmer, A. A., Pers, T. H., Posthuma, D., Schunk, D., Stein, M. B., Svento, R., Tiemeier, H., Timmers, P. R. H. J., Turley, P., Ursano, R. J., Wagner, G. G., Wilson, J. F., Gratten, J., Lee, J. J., Cesarini, D., Benjamin, D. J., Koellinger, P. D., & Beauchamp, J. P. (2019). Genome-wide association analyses of risk tolerance and risky behaviors in over 1 million individuals identify hundreds of loci and shared genetic influences. Nature Genetics, 51, 245-257. doi:10.1038/s41588-018-0309-3.
  • Liszkowski, U., Schäfer, M., Carpenter, M., & Tomasello, M. (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20, 654-660.

    Abstract

    One of the defining features of human language is displacement, the ability to make reference to absent entities. Here we show that prelinguistic, 12-month-old infants already can use a nonverbal pointing gesture to make reference to absent entities. We also show that chimpanzees—who can point for things they want humans to give them—do not point to refer to absent entities in the same way. These results demonstrate that the ability to communicate about absent but mutually known entities depends not on language, but rather on deeper social-cognitive skills that make acts of linguistic reference possible in the first place. These nonlinguistic skills for displaced reference emerged apparently only after humans' divergence from great apes some 6 million years ago.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Reference and attitude in infant pointing. Journal of Child Language, 34(1), 1-20. doi:10.1017/S0305000906007689.

    Abstract

    We investigated two main components of infant declarative pointing, reference and attitude, in two experiments with a total of 106 preverbal infants at 1;0. When an experimenter (E) responded to the declarative pointing of these infants by attending to an incorrect referent (with positive attitude), infants repeated pointing within trials to redirect E’s attention, showing an understanding of E’s reference and active message repair. In contrast, when E identified infants’ referent correctly but displayed a disinterested attitude, infants did not repeat pointing within trials and pointed overall in fewer trials, showing an understanding of E’s unenthusiastic attitude about the referent. When E attended to infants’ intended referent AND shared interest in it, infants were most satisfied, showing no message repair within trials and pointing overall in more trials. These results suggest that by twelve months of age infant declarative pointing is a full communicative act aimed at sharing with others both attention to a referent and a specific attitude about that referent.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10(2), F1-F7. doi:10.1111/j.1467-7687.2006.00552.x.

    Abstract

    There is currently controversy over the nature of 1-year-olds' social-cognitive understanding and motives. In this study we investigated whether 12-month-old infants point for others with an understanding of their knowledge states and with a prosocial motive for sharing experiences with them. Declarative pointing was elicited in four conditions created by crossing two factors: an adult partner (1) was already attending to the target event or not, and (2) emoted positively or neutrally. Pointing was also coded after the event had ceased. The findings suggest that 12-month-olds point to inform others of events they do not know about, that they point to share an attitude about mutually attended events others already know about, and that they can point (already prelinguistically) to absent referents. These findings provide strong support for a mentalistic and prosocial interpretation of infants' prelinguistic communication.
  • Majid, A., Bowerman, M., Van Staden, M., & Boster, J. S. (2007). The semantic categories of cutting and breaking events: A crosslinguistic perspective. Cognitive Linguistics, 18(2), 133-152. doi:10.1515/COG.2007.005.

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2007). The linguistic description of minimal social scenarios affects the extent of causal inference making. Journal of Experimental Social Psychology, 43(6), 918-932. doi:10.1016/j.jesp.2006.10.016.

    Abstract

    There is little consensus regarding the circumstances in which people spontaneously generate causal inferences, and in particular whether they generate inferences about the causal antecedents or the causal consequences of events. We tested whether people systematically infer causal antecedents or causal consequences to minimal social scenarios by using a continuation methodology. People overwhelmingly produced causal antecedent continuations for descriptions of interpersonal events (John hugged Mary), but causal consequence continuations to descriptions of transfer events (John gave a book to Mary). This demonstrates that there is no global cognitive style, but rather inference generation is crucially tied to the input. Further studies examined the role of event unusualness, number of participators, and verb-type on the likelihood of producing a causal antecedent or causal consequence inference. We conclude that inferences are critically guided by the specific verb used.
  • Majid, A., & Bowerman, M. (Eds.). (2007). Cutting and breaking events: A crosslinguistic perspective [Special Issue]. Cognitive Linguistics, 18(2).

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Gullberg, M., Van Staden, M., & Bowerman, M. (2007). How similar are semantic categories in closely related languages? A comparison of cutting and breaking in four Germanic languages. Cognitive Linguistics, 18(2), 179-194. doi:10.1515/COG.2007.007.

    Abstract

    Are the semantic categories of very closely related languages the same? We present a new methodology for addressing this question. Speakers of English, German, Dutch and Swedish described a set of video clips depicting cutting and breaking events. The verbs elicited were then subjected to cluster analysis, which groups scenes together based on similarity (determined by shared verbs). Using this technique, we find that there are surprising differences among the languages in the number of categories, their exact boundaries, and the relationship of the terms to one another, all of which is circumscribed by a common semantic space.
  • Mak, M., & Willems, R. M. (2019). Mental simulation during literary reading: Individual differences revealed with eye-tracking. Language, Cognition and Neuroscience, 34(4), 511-535. doi:10.1080/23273798.2018.1552007.

    Abstract

    People engage in simulation when reading literary narratives. In this study, we tried to pinpoint how different kinds of simulation (perceptual and motor simulation, mentalising) affect reading behaviour. Eye-tracking (gaze durations, regression probability) and questionnaire data were collected from 102 participants, who read three literary short stories. In a pre-test, 90 additional participants indicated which parts of the stories were high in one of the three kinds of simulation-eliciting content. The results show that motor simulation reduces gaze duration (faster reading), whereas perceptual simulation and mentalising increase gaze duration (slower reading). Individual differences in the effect of simulation on gaze duration were found, which were related to individual differences in aspects of story world absorption and story appreciation. These findings suggest fundamental differences between different kinds of simulation and confirm the role of simulation in absorption and appreciation.
  • Mantegna, F., Hintz, F., Ostarek, M., Alday, P. M., & Huettig, F. (2019). Distinguishing integration and prediction accounts of ERP N400 modulations in language processing through experimental design. Neuropsychologia, 134: 107199. doi:10.1016/j.neuropsychologia.2019.107199.

    Abstract

    Prediction of upcoming input is thought to be a main characteristic of language processing (e.g. Altmann & Mirkovic, 2009; Dell & Chang, 2014; Federmeier, 2007; Ferreira & Chantavarin, 2018; Pickering & Gambi, 2018; Hale, 2001; Hickok, 2012; Huettig 2015; Kuperberg & Jaeger, 2016; Levy, 2008; Norris, McQueen, & Cutler, 2016; Pickering & Garrod, 2013; Van Petten & Luka, 2012). One of the main pillars of experimental support for this notion comes from studies that have attempted to measure electrophysiological markers of prediction when participants read or listened to sentences ending in highly predictable words. The N400, a negative-going and centro-parietally distributed component of the ERP occurring approximately 400ms after (target) word onset, has been frequently interpreted as indexing prediction of the word (or the semantic representations and/or the phonological form of the predicted word, see Kutas & Federmeier, 2011; Nieuwland, 2019; Van Petten & Luka, 2012; for review). A major difficulty for interpreting N400 effects in language processing however is that it has been difficult to establish whether N400 target word modulations conclusively reflect prediction rather than (at least partly) ease of integration. In the present exploratory study, we attempted to distinguish lexical prediction (i.e. ‘top-down’ activation) from lexical integration (i.e. ‘bottom-up’ activation) accounts of ERP N400 modulations in language processing.
  • Marklund, P., Fransson, P., Cabeza, R., Petersson, K. M., Ingvar, M., & Nyberg, L. (2007). Sustained and transient neural modulations in prefrontal cortex related to declarative long-term memory, working memory, and attention. Cortex, 43(1), 22-37. doi:10.1016/S0010-9452(08)70443-X.

    Abstract

    Common activations in prefrontal cortex (PFC) during episodic and semantic long-term memory (LTM) tasks have been hypothesized to reflect functional overlap in terms of working memory (WM) and cognitive control. To evaluate a WM account of LTM-general activations, the present study took into consideration that cognitive task performance depends on the dynamic operation of multiple component processes, some of which are stimulus-synchronous and transient in nature; and some that are engaged throughout a task in a sustained fashion. PFC and WM may be implicated in both of these temporally independent components. To elucidate these possibilities we employed mixed blocked/event-related functional magnetic resonance imaging (fMRI) procedures to assess the extent to which sustained or transient activation patterns overlapped across tasks indexing episodic and semantic LTM, attention (ATT), and WM. Within PFC, ventrolateral and medial areas exhibited sustained activity across all tasks, whereas more anterior regions including right frontopolar cortex were commonly engaged in sustained processing during the three memory tasks. These findings do not support a WM account of sustained frontal responses during LTM tasks, but instead suggest that the pattern that was common to all tasks reflects general attentional set/vigilance, and that the shared WM-LTM pattern mediates control processes related to upholding task set. Transient responses during the three memory tasks were assessed relative to ATT to isolate item-specific mnemonic processes and were found to be largely distinct from sustained effects. Task-specific effects were observed for each memory task. In addition, a common item response for all memory tasks involved left dorsolateral PFC (DLPFC). The latter response might be seen as reflecting WM processes during LTM retrieval. Thus, our findings suggest that a WM account of shared PFC recruitment in LTM tasks holds for common transient item-related responses rather than sustained state-related responses that are better seen as reflecting more general attentional/control processes.
  • Martin, A. E., & McElree, B. (2009). Memory operations that support language comprehension: Evidence from verb-phrase ellipsis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(5), 1231-1239. doi:10.1037/a0016271.

    Abstract

    Comprehension of verb-phrase ellipsis (VPE) requires reevaluation of recently processed constituents, which often necessitates retrieval of information about the elided constituent from memory. A. E. Martin and B. McElree (2008) argued that representations formed during comprehension are content addressable and that VPE antecedents are retrieved from memory via a cue-dependent direct-access pointer rather than via a search process. This hypothesis was further tested by manipulating the location of interfering material—either before the onset of the antecedent (proactive interference; PI) or intervening between antecedent and ellipsis site (retroactive interference; RI). The speed–accuracy tradeoff procedure was used to measure the time course of VPE processing. The location of the interfering material affected VPE comprehension accuracy: RI conditions engendered lower accuracy than PI conditions. Crucially, location did not affect the speed of processing VPE, which is inconsistent with both forward and backward search mechanisms. The observed time-course profiles are consistent with the hypothesis that VPE antecedents are retrieved via a cue-dependent direct-access operation.
  • Martin, A. E., & Baggio, G. (2019). Modeling meaning composition from formalism to mechanism. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20190298. doi:10.1098/rstb.2019.0298.

    Abstract

    Human thought and language have extraordinary expressive power because meaningful parts can be assembled into more complex semantic structures. This partly underlies our ability to compose meanings into endlessly novel configurations, and sets us apart from other species and current computing devices. Crucially, human behaviour, including language use and linguistic data, indicates that composing parts into complex structures does not threaten the existence of constituent parts as independent units in the system: parts and wholes exist simultaneously yet independently from one another in the mind and brain. This independence is evident in human behaviour, but it seems at odds with what is known about the brain's exquisite sensitivity to statistical patterns: everyday language use is productive and expressive precisely because it can go beyond statistical regularities. Formal theories in philosophy and linguistics explain this fact by assuming that language and thought are compositional: systems of representations that separate a variable (or role) from its values (fillers), such that the meaning of a complex expression is a function of the values assigned to the variables. The debate on whether and how compositional systems could be implemented in minds, brains and machines remains vigorous. However, it has not yet resulted in mechanistic models of semantic composition: how, then, are the constituents of thoughts and sentences put and held together? We review and discuss current efforts at understanding this problem, and we chart possible routes for future research.
  • Martin, A. E., & Doumas, L. A. A. (2019). Tensors and compositionality in neural systems. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1791): 20190306. doi:10.1098/rstb.2019.0306.

    Abstract

    Neither neurobiological nor process models of meaning composition specify the operator through which constituent parts are bound together into compositional structures. In this paper, we argue that a neurophysiological computation system cannot achieve the compositionality exhibited in human thought and language if it were to rely on a multiplicative operator to perform binding, as the tensor product (TP)-based systems that have been widely adopted in cognitive science, neuroscience and artificial intelligence do. We show via simulation and two behavioural experiments that TPs violate variable-value independence, but human behaviour does not. Specifically, TPs fail to capture that in the statements fuzzy cactus and fuzzy penguin, both cactus and penguin are predicated by fuzzy(x) and belong to the set of fuzzy things, rendering these arguments similar to each other. Consistent with that thesis, people judged arguments that shared the same role to be similar, even when those arguments themselves (e.g., cacti and penguins) were judged to be dissimilar when in isolation. By contrast, the similarity of the TPs representing fuzzy(cactus) and fuzzy(penguin) was determined by the similarity of the arguments, which in this case approaches zero. Based on these results, we argue that neural systems that use TPs for binding cannot approximate how the human mind and brain represent compositional information during processing. We describe a contrasting binding mechanism that any physiological or artificial neural system could use to maintain independence between a role and its argument, a prerequisite for compositionality and, thus, for instantiating the expressive power of human thought and language in a neural system.

    Additional information

    Supplemental Material
  • Martin, A. E., & Doumas, L. A. A. (2019). Predicate learning in neural systems: Using oscillations to discover latent structure. Current Opinion in Behavioral Sciences, 29, 77-83. doi:10.1016/j.cobeha.2019.04.008.

    Abstract

    Humans learn to represent complex structures (e.g. natural language, music, mathematics) from experience with their environments. Often such structures are latent, hidden, or not encoded in statistics about sensory representations alone. Accounts of human cognition have long emphasized the importance of structured representations, yet the majority of contemporary neural networks do not learn structure from experience. Here, we describe one way that structured, functionally symbolic representations can be instantiated in an artificial neural network. Then, we describe how such latent structures (viz. predicates) can be learned from experience with unstructured data. Our approach exploits two principles from psychology and neuroscience: comparison of representations, and the naturally occurring dynamic properties of distributed computing across neuronal assemblies (viz. neural oscillations). We discuss how the ability to learn predicates from experience, to represent information compositionally, and to extrapolate knowledge to unseen data is core to understanding and modeling the most complex human behaviors (e.g. relational reasoning, analogy, language processing, game play).
  • Martinez-Conde, S., Alexander, R. G., Blum, D., Britton, N., Lipska, B. K., Quirk, G. J., Swiss, J. I., Willems, R. M., & Macknik, S. L. (2019). The storytelling brain: How neuroscience stories help bridge the gap between research and society. The Journal of Neuroscience, 39(42), 8285-8290. doi:10.1523/JNEUROSCI.1180-19.2019.

    Abstract

    Active communication between researchers and society is necessary for the scientific community’s involvement in developing science-based policies. This need is recognized by governmental and funding agencies that compel scientists to increase their public engagement and disseminate research findings in an accessible fashion. Storytelling techniques can help convey science by engaging people’s imagination and emotions. Yet, many researchers are uncertain about how to approach scientific storytelling, or feel they lack the tools to undertake it. Here we explore some of the techniques intrinsic to crafting scientific narratives, as well as the reasons why scientific storytelling may be an optimal way of communicating research to nonspecialists. We also point out current communication gaps between science and society, particularly in the context of neurodiverse audiences and those that include neurological and psychiatric patients. Present shortcomings may turn into areas of synergy with the potential to link neuroscience education, research, and advocacy.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2019). How the tracking of habitual rate influences speech perception. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(1), 128-138. doi:10.1037/xlm0000579.

    Abstract

    Listeners are known to track statistical regularities in speech. Yet, which temporal cues are encoded is unclear. This study tested effects of talker-specific habitual speech rate and talker-independent average speech rate (heard over a longer period of time) on the perception of the temporal Dutch vowel contrast /A/-/a:/. First, Experiment 1 replicated that slow local (surrounding) speech contexts induce fewer long /a:/ responses than faster contexts. Experiment 2 tested effects of long-term habitual speech rate. One high-rate group listened to ambiguous vowels embedded in 'neutral' speech from talker A, intermixed with speech from fast talker B. Another low-rate group listened to the same 'neutral' speech from talker A, but to talker B being slow. Between-group comparison of the 'neutral' trials showed that the high-rate group demonstrated a lower proportion of /a:/ responses, indicating that talker A's habitual speech rate sounded slower when B was faster. In Experiment 3, both talkers produced speech at both rates, removing the different habitual speech rates of talkers A and B, while maintaining the average rate difference between groups. This time no global rate effect was observed. Taken together, the present experiments show that a talker's habitual rate is encoded relative to the habitual rate of another talker, carrying implications for episodic and constraint-based models of speech perception.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2019). Listeners normalize speech for contextual speech rate even without an explicit recognition task. The Journal of the Acoustical Society of America, 146(1), 179-188. doi:10.1121/1.5116004.

    Abstract

    Speech can be produced at different rates. Listeners take this rate variation into account by normalizing vowel duration for contextual speech rate: An ambiguous Dutch word /m?t/ is perceived as short /mAt/ when embedded in a slow context, but long /ma:t/ in a fast context. Whilst some have argued that this rate normalization involves low-level automatic perceptual processing, there is also evidence that it arises at higher-level cognitive processing stages, such as decision making. Prior research on rate-dependent speech perception has only used explicit recognition tasks to investigate the phenomenon, involving both perceptual processing and decision making. This study tested whether speech rate normalization can be observed without explicit decision making, using a cross-modal repetition priming paradigm. Results show that a fast precursor sentence makes an embedded ambiguous prime (/m?t/) sound (implicitly) more /a:/-like, facilitating lexical access to the long target word "maat" in a (explicit) lexical decision task. This result suggests that rate normalization is automatic, taking place even in the absence of an explicit recognition task. Thus, rate normalization is placed within the realm of everyday spoken conversation, where explicit categorization of ambiguous sounds is rare.
  • Massaro, D. W., & Jesse, A. (2009). Read my lips: Speech distortions in musical lyrics can be overcome (slightly) by facial information. Speech Communication, 51(7), 604-621. doi:10.1016/j.specom.2008.05.013.

    Abstract

    Understanding the lyrics of many contemporary songs is difficult, and an earlier study [Hidalgo-Barnes, M., Massaro, D.W., 2007. Read my lips: an animated face helps communicate musical lyrics. Psychomusicology 19, 3–12] showed a benefit for lyrics recognition when seeing a computer-animated talking head (Baldi) mouthing the lyrics along with hearing the singer. However, the contribution of visual information was relatively small compared to what is usually found for speech. In the current experiments, our goal was to determine why the face appears to contribute less when aligned with sung lyrics than when aligned with normal speech presented in noise. The first experiment compared the contribution of the talking head with the originally sung lyrics versus the case when it was aligned with the Festival text-to-speech synthesis (TtS) spoken at the original duration of the song’s lyrics. A small and similar influence of the face was found in both conditions. In three further experiments, we compared the presence of the face when the durations of the TtS were equated with the duration of the original musical lyrics to the case when the lyrics were read with typical TtS durations and this speech embedded in noise. The results indicated that the unusual temporally distorted durations of musical lyrics decrease the contribution of the visible speech from the face.
  • McKone, E., Wan, L., Pidcock, M., Crookes, K., Reynolds, K., Dawel, A., Kidd, E., & Fiorentini, C. (2019). A critical period for faces: Other-race face recognition is improved by childhood but not adult social contact. Scientific Reports, 9: 12820. doi:10.1038/s41598-019-49202-0.

    Abstract

    Poor recognition of other-race faces is ubiquitous around the world. We resolve a longstanding contradiction in the literature concerning whether interracial social contact improves the other-race effect. For the first time, we measure the age at which contact was experienced. Taking advantage of unusual demographics allowing dissociation of childhood from adult contact, results show sufficient childhood contact eliminated poor other-race recognition altogether (confirming inter-country adoption studies). Critically, however, the developmental window for easy acquisition of other-race faces closed by approximately 12 years of age, and social contact as an adult — even over several years and involving many other-race friends — produced no improvement. Theoretically, this pattern of developmental change in plasticity mirrors that found in language, suggesting a shared origin grounded in the functional importance of both skills to social communication. Practically, results imply that, where parents wish to ensure their offspring develop the perceptual skills needed to recognise other-race people easily, childhood experience should be encouraged: just as an English-speaking person who moves to France as a child (but not an adult) can easily become a native speaker of French, we can easily become “native recognisers” of other-race faces via natural social exposure obtained in childhood, but not later.
  • McQueen, J. M., & Viebahn, M. C. (2007). Tracking recognition of spoken words by tracking looks to printed words. Quarterly Journal of Experimental Psychology, 60(5), 661-671. doi:10.1080/17470210601183890.

    Abstract

    Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
  • McQueen, J. M., Jesse, A., & Norris, D. (2009). No lexical–prelexical feedback during speech perception or: Is it time to stop playing those Christmas tapes? Journal of Memory and Language, 61, 1-18. doi:10.1016/j.jml.2009.03.002.

    Abstract

    The strongest support for feedback in speech perception comes from evidence of apparent lexical influence on prelexical fricative-stop compensation for coarticulation. Lexical knowledge (e.g., that the ambiguous final fricative of Christma? should be [s]) apparently influences perception of following stops. We argue that all such previous demonstrations can be explained without invoking lexical feedback. In particular, we show that one demonstration [Magnuson, J. S., McMurray, B., Tanenhaus, M. K., & Aslin, R. N. (2003). Lexical effects on compensation for coarticulation: The ghost of Christmash past. Cognitive Science, 27, 285–298] involved experimentally-induced biases (from 16 practice trials) rather than feedback. We found that the direction of the compensation effect depended on whether practice stimuli were words or nonwords. When both were used, there was no lexically-mediated compensation. Across experiments, however, there were lexical effects on fricative identification. This dissociation (lexical involvement in the fricative decisions but not in the following stop decisions made on the same trials) challenges interactive models in which feedback should cause both effects. We conclude that the prelexical level is sensitive to experimentally-induced phoneme-sequence biases, but that there is no feedback during speech perception.
  • Mead, S., Poulter, M., Uphill, J., Beck, J., Whitfield, J., Webb, T. E., Campbell, T., Adamson, G., Deriziotis, P., Tabrizi, S. J., Hummerich, H., Verzilli, C., Alpers, M. P., Whittaker, J. C., & Collinge, J. (2009). Genetic risk factors for variant Creutzfeldt-Jakob disease: A genome-wide association study. Lancet Neurology, 8(1), 57-66. doi:10.1016/S1474-4422(08)70265-5.

    Abstract

    BACKGROUND: Human and animal prion diseases are under genetic control, but apart from PRNP (the gene that encodes the prion protein), we understand little about human susceptibility to bovine spongiform encephalopathy (BSE) prions, the causal agent of variant Creutzfeldt-Jakob disease (vCJD). METHODS: We did a genome-wide association study of the risk of vCJD and tested for replication of our findings in samples from many categories of human prion disease (929 samples) and control samples from the UK and Papua New Guinea (4254 samples), including controls in the UK who were genotyped by the Wellcome Trust Case Control Consortium. We also did follow-up analyses of the genetic control of the clinical phenotype of prion disease and analysed candidate gene expression in a mouse cellular model of prion infection. FINDINGS: The PRNP locus was strongly associated with risk across several markers and all categories of prion disease (best single SNP [single nucleotide polymorphism] association in vCJD p=2.5 × 10⁻¹⁷; best haplotypic association in vCJD p=1 × 10⁻²⁴). Although the main contribution to disease risk was conferred by PRNP polymorphic codon 129, another nearby SNP conferred increased risk of vCJD. In addition to PRNP, one technically validated SNP association upstream of RARB (the gene that encodes retinoic acid receptor beta) had nominal genome-wide significance (p=1.9 × 10⁻⁷). A similar association was found in a small sample of patients with iatrogenic CJD (p=0.030) but not in patients with sporadic CJD (sCJD) or kuru. In cultured cells, retinoic acid regulates the expression of the prion protein. We found an association with acquired prion disease, including vCJD (p=5.6 × 10⁻⁵), kuru incubation time (p=0.017), and resistance to kuru (p=2.5 × 10⁻⁴), in a region upstream of STMN2 (the gene that encodes SCG10). The risk genotype was not associated with sCJD but conferred an earlier age of onset. Furthermore, expression of Stmn2 was reduced 30-fold post-infection in a mouse cellular model of prion disease. INTERPRETATION: The polymorphic codon 129 of PRNP was the main genetic risk factor for vCJD; however, additional candidate loci have been identified, which justifies functional analyses of these biological pathways in prion disease.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners’ experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In the present study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support the claim from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
  • Menenti, L., & Burani, C. (2007). What causes the effect of age of acquisition in lexical processing? Quarterly Journal of Experimental Psychology, 60(5), 652-660. doi:10.1080/17470210601100126.

    Abstract

    Three hypotheses for effects of age of acquisition (AoA) in lexical processing are compared: the cumulative frequency hypothesis (frequency and AoA both influence the number of encounters with a word, which influences processing speed), the semantic hypothesis (early-acquired words are processed faster because they are more central in the semantic network), and the neural network model (early-acquired words are faster because they are acquired when a network has maximum plasticity). In a regression study of lexical decision (LD) and semantic categorization (SC) in Italian and Dutch, contrary to the cumulative frequency hypothesis, AoA coefficients were larger than frequency coefficients, and, contrary to the semantic hypothesis, the effect of AoA was not larger in SC than in LD. The neural network model was supported.
  • Menenti, L., Petersson, K. M., Scheeringa, R., & Hagoort, P. (2009). When elephants fly: Differential sensitivity of right and left inferior frontal gyri to discourse and world knowledge. Journal of Cognitive Neuroscience, 21, 2358-2368. doi:10.1162/jocn.2008.21163.

    Abstract

    Both local discourse and world knowledge are known to influence sentence processing. We investigated how these two sources of information conspire in language comprehension. Two types of critical sentences, correct and world knowledge anomalies, were preceded by either a neutral or a local context. The latter made the world knowledge anomalies more acceptable or plausible. We predicted that the effect of world knowledge anomalies would be weaker for the local context. World knowledge effects have previously been observed in the left inferior frontal region (Brodmann's area 45/47). In the current study, an effect of world knowledge was present in this region in the neutral context. We also observed an effect in the right inferior frontal gyrus, which was more sensitive to the discourse manipulation than the left inferior frontal gyrus. In addition, the left angular gyrus reacted strongly to the degree of discourse coherence between the context and critical sentence. Overall, both world knowledge and the discourse context affect the process of meaning unification, but do so by recruiting partly different sets of brain areas.
  • Menon, S., Rosenberg, K., Graham, S. A., Ward, E. M., Taylor, M. E., Drickamer, K., & Leckband, D. E. (2009). Binding-site geometry and flexibility in DC-SIGN demonstrated with surface force measurements. PNAS, 106, 11524-11529. doi:10.1073/pnas.0901783106.

    Abstract

    The dendritic cell receptor DC-SIGN mediates pathogen recognition by binding to glycans characteristic of pathogen surfaces, including those found on HIV. Clustering of carbohydrate-binding sites in the receptor tetramer is believed to be critical for targeting of pathogen glycans, but the arrangement of these sites remains poorly understood. Surface force measurements between apposed lipid bilayers displaying the extracellular domain of DC-SIGN and a neoglycolipid bearing an oligosaccharide ligand provide evidence that the receptor is in an extended conformation and that glycan docking is associated with a conformational change that repositions the carbohydrate-recognition domains during ligand binding. The results further show that the lateral mobility of membrane-bound ligands enhances the engagement of multiple carbohydrate-recognition domains in the receptor oligomer with appropriately spaced ligands. These studies highlight differences between pathogen targeting by DC-SIGN and receptors in which binding sites at fixed spacing bind to simple molecular patterns

    Additional information

    Menon_2009_Supporting_Information.pdf
  • Merkx, D., & Frank, S. L. (2019). Learning semantic sentence representations from visually grounded language without lexical knowledge. Natural Language Engineering, 25, 451-466. doi:10.1017/S1351324919000196.

    Abstract

    Current approaches to learning semantic representations of sentences often use prior word-level knowledge. The current study aims to leverage visual information in order to capture sentence level semantics without the need for word embeddings. We use a multimodal sentence encoder trained on a corpus of images with matching text captions to produce visually grounded sentence embeddings. Deep Neural Networks are trained to map the two modalities to a common embedding space such that for an image the corresponding caption can be retrieved and vice versa. We show that our model achieves results comparable to the current state of the art on two popular image-caption retrieval benchmark datasets: Microsoft Common Objects in Context (MSCOCO) and Flickr8k. We evaluate the semantic content of the resulting sentence embeddings using the data from the Semantic Textual Similarity (STS) benchmark task and show that the multimodal embeddings correlate well with human semantic similarity judgements. The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics. Importantly, this result shows that we do not need prior knowledge of lexical level semantics in order to model sentence level semantics. These findings demonstrate the importance of visual information in semantics.
  • Meyer, A. S., & Damian, M. F. (2007). Activation of distractor names in the picture-picture interference paradigm. Memory & Cognition, 35, 494-503.

    Abstract

    In four experiments, participants named target pictures that were accompanied by distractor pictures with phonologically related or unrelated names. Across experiments, the type of phonological relationship between the targets and the related distractors was varied: They were homophones (e.g., bat [animal/baseball]), or they shared word-initial segments (e.g., dog-doll) or word-final segments (e.g., ball-wall). The participants either named the objects after an extensive familiarization and practice phase or without any familiarization or practice. In all of the experiments, the mean target-naming latency was shorter in the related than in the unrelated condition, demonstrating that the phonological form of the name of the distractor picture became activated. These results are best explained within a cascaded model of lexical access—that is, under the assumption that the recognition of an object leads to the activation of its name.
  • Meyer, A. S., Belke, E., Telling, A. L., & Humphreys, G. W. (2007). Early activation of object names in visual search. Psychonomic Bulletin & Review, 14, 710-716.

    Abstract

    In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target-for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention.
  • Meyer, A. S., Roelofs, A., & Brehm, L. (2019). Thirty years of Speaking: An introduction to the special issue. Language, Cognition and Neuroscience, 34(9), 1073-1084. doi:10.1080/23273798.2019.1652763.

    Abstract

    Thirty years ago, Pim Levelt published Speaking. During the 10th International Workshop on Language Production held at the Max Planck Institute for Psycholinguistics in Nijmegen in July 2018, researchers reflected on the impact of the book in the field, developments since its publication, and current research trends. The contributions in this Special Issue are closely related to the presentations given at the workshop. In this editorial, we sketch the research agenda set by Speaking, review how different aspects of this agenda are taken up in the papers in this volume and outline directions for further research.
  • Meyer, A. S., Belke, E., Häcker, C., & Mortensen, L. (2007). Use of word length information in utterance planning. Journal of Memory and Language, 57, 210-231. doi:10.1016/j.jml.2006.10.005.

    Abstract

    Griffin [Griffin, Z. M. (2003). A reversed length effect in coordinating the preparation and articulation of words in speaking. Psychonomic Bulletin & Review, 10, 603-609.] found that speakers naming object pairs spent more time before utterance onset looking at the second object when the first object name was short than when it was long. She proposed that this reversed length effect arose because the speakers' decision when to initiate an utterance was based, in part, on their estimate of the spoken duration of the first object name and the time available during its articulation to plan the second object name. In Experiment 1 of the present study, participants named object pairs. They spent more time looking at the first object when its name was monosyllabic than when it was trisyllabic, and, as in Griffin's study, the average gaze-speech lag (the time between the end of the gaze to the first object and onset of its name, which corresponds closely to the pre-speech inspection time for the second object) showed a reversed length effect. Experiments 2 and 3 showed that this effect was not due to a trade-off between the time speakers spent looking at the first and second object before speech onset. Experiment 4 yielded a reversed length effect when the second object was replaced by a symbol (x or +), which the participants had to categorise. We propose a novel account of the reversed length effect, which links it to the incremental nature of phonological encoding and articulatory planning rather than the speaker's estimate of the length of the first object name.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2019). Bridging the gap between second language acquisition research and memory science: The case of foreign language attrition. Frontiers in Human Neuroscience, 13: 397. doi:10.3389/fnhum.2019.00397.

    Abstract

    The field of second language acquisition (SLA) is by nature of its subject a highly interdisciplinary area of research. Learning a (foreign) language, for example, involves encoding new words, consolidating and committing them to long-term memory, and later retrieving them. All of these processes have direct parallels in the domain of human memory and have been thoroughly studied by researchers in that field. Yet, despite these clear links, the two fields have largely developed in parallel and in isolation from one another. The present paper aims to promote more cross-talk between SLA and memory science. We focus on foreign language (FL) attrition as an example of a research topic in SLA where the parallels with memory science are especially apparent. We discuss evidence that suggests that competition between languages is one of the mechanisms of FL attrition, paralleling the interference process thought to underlie forgetting in other domains of human memory. Backed up by concrete suggestions, we advocate the use of paradigms from the memory literature to study these interference effects in the language domain. In doing so, we hope to facilitate future cross-talk between the two fields, and to further our understanding of FL attrition as a memory phenomenon.
  • Middeldorp, C. M., Felix, J. F., Mahajan, A., EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, Early Growth Genetics (EGG) consortium, & McCarthy, M. I. (2019). The Early Growth Genetics (EGG) and EArly Genetics and Lifecourse Epidemiology (EAGLE) consortia: Design, results and future prospects. European Journal of Epidemiology, 34(3), 279-300. doi:10.1007/s10654-019-00502-9.

    Abstract

    The impact of many unfavorable childhood traits or diseases, such as low birth weight and mental disorders, is not limited to childhood and adolescence, as they are also associated with poor outcomes in adulthood, such as cardiovascular disease. Insight into the genetic etiology of childhood and adolescent traits and disorders may therefore provide new perspectives, not only on how to improve wellbeing during childhood, but also how to prevent later adverse outcomes. To achieve the sample sizes required for genetic research, the Early Growth Genetics (EGG) and EArly Genetics and Lifecourse Epidemiology (EAGLE) consortia were established. The majority of the participating cohorts are longitudinal population-based samples, but other cohorts with data on early childhood phenotypes are also involved. Cohorts often have a broad focus and collect(ed) data on various somatic and psychiatric traits as well as environmental factors. Genetic variants have been successfully identified for multiple traits, for example, birth weight, atopic dermatitis, childhood BMI, allergic sensitization, and pubertal growth. Furthermore, the results have shown that genetic factors also partly underlie the association with adult traits. As sample sizes are still increasing, it is expected that future analyses will identify additional variants. This, in combination with the development of innovative statistical methods, will provide detailed insight on the mechanisms underlying the transition from childhood to adult disorders. Both consortia welcome new collaborations. Policies and contact details are available from the corresponding authors of this manuscript and/or the consortium websites.
  • Minutjukur, M., Tjitayi, K., Tjitayi, U., & Defina, R. (2019). Pitjantjatjara language change: Some observations and recommendations. Australian Aboriginal Studies, (1), 82-91.
  • Misersky, J., Majid, A., & Snijders, T. M. (2019). Grammatical gender in German influences how role-nouns are interpreted: Evidence from ERPs. Discourse Processes, 56(8), 643-654. doi:10.1080/0163853X.2018.1541382.

    Abstract

    Grammatically masculine role-nouns (e.g., Studenten-masc.‘students’) can refer to men and women, but may favor an interpretation where only men are considered the referent. If true, this has implications for a society aiming to achieve equal representation in the workplace since, for example, job adverts use such role descriptions. To investigate the interpretation of role-nouns, the present ERP study assessed grammatical gender processing in German. Twenty participants read sentences where a role-noun (masculine or feminine) introduced a group of people, followed by a congruent (masculine–men, feminine–women) or incongruent (masculine–women, feminine–men) continuation. Both for feminine-men and masculine-women continuations a P600 (500 to 800 ms) was observed; another positivity was already present from 300 to 500 ms for feminine-men continuations, but critically not for masculine-women continuations. The results imply a male-biased rather than gender-neutral interpretation of the masculine—despite widespread usage of the masculine as a gender-neutral form—suggesting masculine forms are inadequate for representing genders equally.
  • Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLoS ONE, 4(11), e7785. doi:10.1371/journal.pone.0007785.

    Abstract

    Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.
  • Mitterer, H., & McQueen, J. M. (2009). Processing reduced word-forms in speech perception using probabilistic knowledge about speech production. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 244-263. doi:10.1037/a0012730.

    Abstract

    Two experiments examined how Dutch listeners deal with the effects of connected-speech processes, specifically those arising from word-final /t/ reduction (e.g., whether Dutch [tas] is tas, bag, or a reduced-/t/ version of tast, touch). Eye movements of Dutch participants were tracked as they looked at arrays containing 4 printed words, each associated with a geometrical shape. Minimal pairs (e.g., tas/tast) were either both above (boven) or both next to (naast) different shapes. Spoken instructions (e.g., “Klik op het woordje tas boven de ster,” [Click on the word bag above the star]) thus became unambiguous only on their final words. Prior to disambiguation, listeners' fixations were drawn to /t/-final words more when boven than when naast followed the ambiguous sequences. This behavior reflects Dutch speech-production data: /t/ is reduced more before /b/ than before /n/. We thus argue that probabilistic knowledge about the effect of following context in speech production is used prelexically in perception to help resolve lexical ambiguities caused by continuous-speech processes.
  • Mitterer, H., Horschig, J. M., Müsseler, J., & Majid, A. (2009). The influence of memory on perception: It's not what things look like, it's what you call them. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(6), 1557-1562. doi:10.1037/a0017019.

    Abstract

    World knowledge influences how we perceive the world. This study shows that this influence is at least partly mediated by declarative memory. Dutch and German participants categorized hues from a yellow-to-orange continuum on stimuli that were prototypically orange or yellow and that were also associated with these color labels. Both groups gave more “yellow” responses if an ambiguous hue occurred on a prototypically yellow stimulus. The language groups were also tested on a stimulus (traffic light) that is associated with the label orange in Dutch and with the label yellow in German, even though the objective color is the same for both populations. Dutch observers categorized this stimulus as orange more often than German observers, in line with the assumption that declarative knowledge mediates the influence of world knowledge on color categorization.

  • Monaco, A., Fisher, S. E., & The SLI Consortium (SLIC) (2007). Multivariate linkage analysis of specific language impairment (SLI). Annals of Human Genetics, 71(5), 660-673. doi:10.1111/j.1469-1809.2007.00361.x.

    Abstract

    Specific language impairment (SLI) is defined as an inability to develop appropriate language skills without explanatory medical conditions, low intelligence or lack of opportunity. Previously, a genome scan of 98 families affected by SLI was completed by the SLI Consortium, resulting in the identification of two quantitative trait loci (QTL) on chromosomes 16q (SLI1) and 19q (SLI2). This was followed by a replication of both regions in an additional 86 families. Both these studies applied linkage methods to one phenotypic trait at a time. However, investigations have suggested that simultaneous analysis of several traits may offer more power. The current study therefore applied a multivariate variance-components approach to the SLI Consortium dataset using additional phenotypic data. A multivariate genome scan was completed and supported the importance of the SLI1 and SLI2 loci, whilst highlighting a possible novel QTL on chromosome 10. Further investigation implied that the effect of SLI1 on non-word repetition was equally as strong on reading and spelling phenotypes. In contrast, SLI2 appeared to have influences on a selection of expressive and receptive language phenotypes in addition to non-word repetition, but did not show linkage to literacy phenotypes.

    Additional information

    Members_SLIC.doc
  • Monaghan, P., & Fletcher, M. (2019). Do sound symbolism effects for written words relate to individual phonemes or to phoneme features? Language and Cognition, 11(2), 235-255. doi:10.1017/langcog.2019.20.

    Abstract

    The sound of words has been shown to relate to the meaning that the words denote, an effect that extends beyond morphological properties of the word. Studies of these sound-symbolic relations have described this iconicity in terms of individual phonemes or, alternatively, in terms of acoustic properties (expressed in phonological features) relating to meaning. In this study, we investigated whether individual phonemes or phoneme features best accounted for iconicity effects. We tested 92 participants’ judgements about the appropriateness of 320 nonwords presented in written form, relating to 8 different semantic attributes. For all 8 attributes, individual phonemes fitted participants’ responses better than general phoneme features. These results challenge claims that sound-symbolic effects for visually presented words can access broad, cross-modal associations between sound and meaning; instead, the results indicate the operation of individual phoneme-to-meaning relations. Whether similar effects are found for nonwords presented auditorily remains an open question.
  • Monaghan, P., & Roberts, S. G. (2019). Cognitive influences in language evolution: Psycholinguistic predictors of loan word borrowing. Cognition, 186, 147-158. doi:10.1016/j.cognition.2019.02.007.

    Abstract

    Languages change due to social, cultural, and cognitive influences. In this paper, we provide an assessment of these cognitive influences on diachronic change in the vocabulary. Previously, tests of stability and change of vocabulary items have been conducted on small sets of words where diachronic change is imputed from cladistics studies. Here, we show for a substantially larger set of words that stability and change in terms of documented borrowings of words into English and into Dutch can be predicted by psycholinguistic properties of words that reflect their representational fidelity. We found that grammatical category, word length, age of acquisition, and frequency predict borrowing rates, but frequency has a non-linear relationship. Frequency correlates negatively with probability of borrowing for high-frequency words, but positively for low-frequency words. This borrowing evidence documents recent, observable diachronic change in the vocabulary enabling us to distinguish between change associated with transmission during language acquisition and change due to innovations by proficient speakers.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects sentence processing. Neuroimage, 202: 116063. doi:10.1016/j.neuroimage.2019.116063.

    Abstract

    How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing, however, may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
  • Morgan, T. J. H., Acerbi, A., & Van Leeuwen, E. J. C. (2019). Copy-the-majority of instances or individuals? Two approaches to the majority and their consequences for conformist decision-making. PLoS One, 14(1): e0210748. doi:10.1371/journal.pone.0210748.

    Abstract

    Cultural evolution is the product of the psychological mechanisms that underlie individual decision making. One commonly studied learning mechanism is a disproportionate preference for majority opinions, known as conformist transmission. While most theoretical and experimental work approaches the majority in terms of the number of individuals that perform a behaviour or hold a belief, some recent experimental studies approach the majority in terms of the number of instances a behaviour is performed. Here, we use a mathematical model to show that disagreement between these two notions of the majority can arise when behavioural variants are performed at different rates, with different salience or in different contexts (variant overrepresentation) and when a subset of the population act as demonstrators to the whole population (model biases). We also show that because conformist transmission changes the distribution of behaviours in a population, how observers approach the majority can cause populations to diverge, and that this can happen even when the two approaches to the majority agree with regard to which behaviour is in the majority. We discuss these results in light of existing findings, ranging from political extremism on Twitter to studies of animal foraging behaviour. We conclude that the factors we considered (variant overrepresentation and model biases) are plausibly widespread. As such, it is important to understand how individuals approach the majority in order to understand the effects of majority influence in cultural evolution.
  • Murty, L., Otake, T., & Cutler, A. (2007). Perceptual tests of rhythmic similarity: I. Mora Rhythm. Language and Speech, 50(1), 77-99. doi:10.1177/00238309070500010401.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. The rhythmic similarity hypothesis holds that where two languages have similar rhythm, listeners of each language should segment their own and the other language similarly. Such similarity in listening was previously observed only for related languages (English-Dutch; French-Spanish). We now report three experiments in which speakers of Telugu, a Dravidian language unrelated to Japanese but similar to it in crucial aspects of rhythmic structure, heard speech in Japanese and in their own language, and Japanese listeners heard Telugu. For the Telugu listeners, detection of target sequences in Japanese speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. The same results appeared when Japanese listeners heard Telugu speech containing only codas permissible in Japanese. Telugu listeners' results with Telugu speech were mixed, but the overall pattern revealed correspondences between the response patterns of the two listener groups, as predicted by the rhythmic similarity hypothesis. Telugu and Japanese listeners appear to command similar procedures for speech segmentation, further bolstering the proposal that aspects of language phonological structure affect listeners' speech segmentation.
  • Nakamoto, T., Suei, Y., Konishi, M., Kanda, T., Verdonschot, R. G., & Kakimoto, N. (2019). Abnormal positioning of the common carotid artery clinically diagnosed as a submandibular mass. Oral Radiology, 35(3), 331-334. doi:10.1007/s11282-018-0355-7.

    Abstract

    The common carotid artery (CCA) usually runs along the long axis of the neck, although it is occasionally found in an abnormal position or is displaced. We report a case of an 86-year-old woman in whom the CCA was identified in the submandibular area. The patient visited our clinic and reported soft tissue swelling in the right submandibular area. It resembled a tumor mass or a swollen lymph node. Computed tomography showed that it was the right CCA that had been bent forward and was running along the submandibular subcutaneous area. Ultrasonography verified the diagnosis. No other lesions were found on the diagnostic images. Consequently, the patient was diagnosed as having abnormal CCA positioning. Although this condition generally requires no treatment, it is important to follow up the abnormality with diagnostic imaging because of the risk of cerebrovascular disorders.
  • Nakamoto, T., Taguchi, A., Verdonschot, R. G., & Kakimoto, N. (2019). Improvement of region of interest extraction and scanning method of computer-aided diagnosis system for osteoporosis using panoramic radiographs. Oral Radiology, 35(2), 143-151. doi:10.1007/s11282-018-0330-3.

    Abstract

    Objectives: Patients undergoing osteoporosis treatment benefit greatly from early detection. We previously developed a computer-aided diagnosis (CAD) system to identify osteoporosis using panoramic radiographs. However, the region of interest (ROI) was relatively small, and the method to select suitable ROIs was labor-intensive. This study aimed to expand the ROI and perform semi-automatized extraction of ROIs. The diagnostic performance and operating time were also assessed. Methods: We used panoramic radiographs and skeletal bone mineral density data of 200 postmenopausal women. Using the reference point that we defined by averaging 100 panoramic images as the lower mandibular border under the mental foramen, a 400 × 100-pixel ROI was automatically extracted and divided into four 100 × 100-pixel blocks. Valid blocks were analyzed using program 1, which examined each block separately, and program 2, which divided the blocks into smaller segments and performed scans/analyses across blocks. Diagnostic performance was evaluated using another set of 100 panoramic images. Results: Most ROIs (97.0%) were correctly extracted. The operation time decreased to 51.4% for program 1 and to 69.3% for program 2. The sensitivity, specificity, and accuracy for identifying osteoporosis were 84.0, 68.0, and 72.0% for program 1 and 92.0, 62.7, and 70.0% for program 2, respectively. Compared with the previous conventional system, program 2 recorded a slightly higher sensitivity, although it occasionally also elicited false positives. Conclusions: Patients at risk for osteoporosis can be identified more rapidly using this new CAD system, which may contribute to earlier detection and intervention and improved medical care.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such “multiple-participant events” include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Narasimhan, B. (2007). Cutting, breaking, and tearing verbs in Hindi and Tamil. Cognitive Linguistics, 18(2), 195-205. doi:10.1515/COG.2007.008.

    Abstract

    Tamil and Hindi verbs of cutting, breaking, and tearing are shown to have a high degree of overlap in their extensions. However, there are also differences in the lexicalization patterns of these verbs in the two languages with regard to their category boundaries, and the number of verb types that are available to make finer-grained distinctions. Moreover, differences in the extensional ranges of corresponding verbs in the two languages can be motivated in terms of the properties of the instrument and the theme object.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (2007). "Two's company, more is a crowd": The linguistic encoding of multiple-participant events. Linguistics, 45(3), 383-392. doi:10.1515/LING.2007.013.

    Abstract

    This introduction to a special issue of the journal Linguistics sketches the challenges that multiple-participant events pose for linguistic and psycholinguistic theories, and summarizes the articles in the volume.
  • Nayernia, L., Van den Vijver, R., & Indefrey, P. (2019). The influence of orthography on phonemic knowledge: An experimental investigation on German and Persian. Journal of Psycholinguistic Research, 48(6), 1391-1406. doi:10.1007/s10936-019-09664-9.

    Abstract

    This study investigated whether the phonological representation of a word is modulated by its orthographic representation in case of a mismatch between the two representations. Such a mismatch is found in Persian, where short vowels are represented phonemically but not orthographically. Persian adult literates, Persian adult illiterates, and German adult literates were presented with two auditory tasks, an AX-discrimination task and a reversal task. We assumed that if orthographic representations influence phonological representations, Persian literates should perform worse than Persian illiterates or German literates on items with short vowels in these tasks. The results of the discrimination tasks showed that Persian literates and illiterates as well as German literates were approximately equally competent in discriminating short vowels in Persian words and pseudowords. Persian literates did not discriminate well between German words containing phonemes that differed only in vowel length. German literates performed relatively poorly in discriminating German homographic words that differed only in vowel length. Persian illiterates were unable to perform the reversal task in Persian. The results of the other two participant groups in the reversal task showed the predicted poorer performance of Persian literates on Persian items containing short vowels compared to items containing long vowels only. German literates did not show this effect in German. Our results suggest two distinct effects of orthography on phonemic representations: whereas the lack of orthographic representations seems to affect phonemic awareness, homography seems to affect the discriminability of phonemic representations.
  • Nazzi, T., & Cutler, A. (2019). How consonants and vowels shape spoken-language recognition. Annual Review of Linguistics, 5, 25-47. doi:10.1146/annurev-linguistics-011718-011919.

    Abstract

    All languages instantiate a consonant/vowel contrast. This contrast has processing consequences at different levels of spoken-language recognition throughout the lifespan. In adulthood, lexical processing is more strongly associated with consonant than with vowel processing; this has been demonstrated across 13 languages from seven language families and in a variety of auditory lexical-level tasks (deciding whether a spoken input is a word, spotting a real word embedded in a minimal context, reconstructing a word minimally altered into a pseudoword, learning new words or the “words” of a made-up language), as well as in written-word tasks involving phonological processing. In infancy, a consonant advantage in word learning and recognition is found to emerge during development in some languages, though possibly not in others, revealing that the stronger lexicon–consonant association found in adulthood is learned. Current research is evaluating the relative contribution of the early acquisition of the acoustic/phonetic and lexical properties of the native language in the emergence of this association.
  • Need, A. C., Ge, D., Weale, M. E., Maia, J., Feng, S., Heinzen, E. L., Shianna, K. V., Yoon, W., Kasperavičiūtė, D., Gennarelli, M., Strittmatter, W. J., Bonvicini, C., Rossi, G., Jayathilake, K., Cola, P. A., McEvoy, J. P., Keefe, R. S. E., Fisher, E. M. C., St. Jean, P. L., Giegling, I., Hartmann, A. M., Möller, H.-J., Ruppert, A., Fraser, G., Crombie, C., Middleton, L. T., St. Clair, D., Roses, A. D., Muglia, P., Francks, C., Rujescu, D., Meltzer, H. Y., & Goldstein, D. B. (2009). A genome-wide investigation of SNPs and CNVs in schizophrenia. PLoS Genetics, 5(2), e1000373. doi:10.1371/journal.pgen.1000373.

    Abstract

    We report a genome-wide assessment of single nucleotide polymorphisms (SNPs) and copy number variants (CNVs) in schizophrenia. We investigated SNPs using 871 patients and 863 controls, following up the top hits in four independent cohorts comprising 1,460 patients and 12,995 controls, all of European origin. We found no genome-wide significant associations, nor could we provide support for any previously reported candidate gene or genome-wide associations. We went on to examine CNVs using a subset of 1,013 cases and 1,084 controls of European ancestry, and a further set of 60 cases and 64 controls of African ancestry. We found that eight cases and zero controls carried deletions greater than 2 Mb, of which two, at 8p22 and 16p13.11-p12.4, are newly reported here. A further evaluation of 1,378 controls identified no deletions greater than 2 Mb, suggesting a high prior probability of disease involvement when such deletions are observed in cases. We also provide further evidence for some smaller, previously reported, schizophrenia-associated CNVs, such as those in NRXN1 and APBA2. We could not provide strong support for the hypothesis that schizophrenia patients have a significantly greater “load” of large (>100 kb), rare CNVs, nor could we find common CNVs that associate with schizophrenia. Finally, we did not provide support for the suggestion that schizophrenia-associated CNVs may preferentially disrupt genes in neurodevelopmental pathways. Collectively, these analyses provide the first integrated study of SNPs and CNVs in schizophrenia and support the emerging view that rare deleterious variants may be more important in schizophrenia predisposition than common polymorphisms. While our analyses do not suggest that implicated CNVs impinge on particular key pathways, we do support the contribution of specific genomic regions in schizophrenia, presumably due to recurrent mutation. On balance, these data suggest that very few schizophrenia patients share identical genomic causation, potentially complicating efforts to personalize treatment regimens.
  • Newbury, D. F., Winchester, L., Addis, L., Paracchini, S., Buckingham, L.-L., Clark, A., Cohen, W., Cowie, H., Dworzynski, K., Everitt, A., Goodyer, I. M., Hennessy, E., Kindley, A. D., Miller, L. L., Nasir, J., O'Hare, A., Shaw, D., Simkin, Z., Simonoff, E., Slonims, V., Watson, J., Ragoussis, J., Fisher, S. E., Seckl, J. R., Helms, P. J., Bolton, P. F., Pickles, A., Conti-Ramsden, G., Baird, G., Bishop, D. V., & Monaco, A. P. (2009). CMIP and ATP2C2 modulate phonological short-term memory in language impairment. American Journal of Human Genetics, 85(2), 264-272. doi:10.1016/j.ajhg.2009.07.004.

    Abstract

    Specific language impairment (SLI) is a common developmental disorder characterized by difficulties in language acquisition despite otherwise normal development and in the absence of any obvious explanatory factors. We performed a high-density screen of SLI1, a region of chromosome 16q that shows highly significant and consistent linkage to nonword repetition, a measure of phonological short-term memory that is commonly impaired in SLI. Using two independent language-impaired samples, one family-based (211 families) and another selected from a population cohort on the basis of extreme language measures (490 cases), we detected association to two genes in the SLI1 region: that encoding c-maf-inducing protein (CMIP, minP = 5.5 × 10−7 at rs6564903) and that encoding calcium-transporting ATPase, type2C, member2 (ATP2C2, minP = 2.0 × 10−5 at rs11860694). Regression modeling indicated that each of these loci exerts an independent effect upon nonword repetition ability. Despite the consistent findings in language-impaired samples, investigation in a large unselected cohort (n = 3612) did not detect association. We therefore propose that variants in CMIP and ATP2C2 act to modulate phonological short-term memory primarily in the context of language impairment. As such, this investigation supports the hypothesis that some causes of language impairment are distinct from factors that influence normal language variation. This work therefore implicates CMIP and ATP2C2 in the etiology of SLI and provides molecular evidence for the importance of phonological short-term memory in language acquisition.

    Additional information

    mmc1.pdf
  • Newman-Norlund, S. E., Noordzij, M. L., Newman-Norlund, R. D., Volman, I. A., De Ruiter, J. P., Hagoort, P., & Toni, I. (2009). Recipient design in tacit communication. Cognition, 111, 46-54. doi:10.1016/j.cognition.2008.12.004.

    Abstract

    The ability to design tailored messages for specific listeners is an important aspect of human communication. The present study investigates whether a mere belief about an addressee’s identity influences the generation and production of a communicative message in a novel, non-verbal communication task. Participants were made to believe they were playing a game with a child or an adult partner, while a confederate acted as both child and adult partner with matched performance and response times. The participants’ belief influenced their behavior: they spent longer when interacting with the presumed child addressee, but only during communicative portions of the game, i.e., using time as a tool to place emphasis on target information. This communicative adaptation attenuated with experience, and it was related to personality traits, namely Empathy and Need for Cognition measures. Overall, these findings indicate that novel nonverbal communicative interactions are selected according to a socio-centric perspective, and they are strongly influenced by participants’ traits.
  • Niemi, J., Laine, M., & Järvikivi, J. (2009). Paradigmatic and extraparadigmatic morphology in the mental lexicon: Experimental evidence for a dissociation. The mental lexicon, 4(1), 26-40. doi:10.1075/ml.4.1.02nie.

    Abstract

    The present study discusses psycholinguistic evidence for a difference between paradigmatic and extraparadigmatic morphology by investigating the processing of Finnish inflected and cliticized words. The data are derived from three sources of Finnish: from single-word reading performance in an agrammatic deep dyslectic speaker, as well as from visual lexical decision and wordness/learnability ratings of cliticized vs. inflected items by normal Finnish speakers. The agrammatic speaker showed awareness of the suffixes in multimorphemic words, including clitics, since he attempted to fill in this slot with morphological material. However, he never produced a clitic — either as the correct response or as an error — in any morphological configuration (simplex, derived, inflected, compound). Moreover, he produced more nominative singular errors for case-inflected nouns than he did for the cliticized words, a pattern that is expected if case-inflected forms were closely associated with their lexical heads, i.e., if they were paradigmatic and cliticized words were not. Furthermore, a visual lexical decision task with normal speakers of Finnish showed an additional processing cost (longer latencies and more errors) on cliticized than on case-inflected noun forms. Finally, a rating task indicated no difference in relative wordness between these two types of words. However, the same cliticized words were judged harder to learn as L2 items than the inflected words, most probably due to their conceptual/semantic properties, in other words due to their lack of word-level translation equivalents in SAE languages. Taken together, the present results suggest that the distinction between paradigmatic and extraparadigmatic morphology is psychologically real.
  • Niermann, H. C. M., Tyborowska, A., Cillessen, A. H. N., Van Donkelaar, M. M. J., Lammertink, F., Gunnar, M. R., Franke, B., Figner, B., & Roelofs, K. (2019). The relation between infant freezing and the development of internalizing symptoms in adolescence: A prospective longitudinal study. Developmental Science, 22(3): e12763. doi:10.1111/desc.12763.

    Abstract

    Given the long-lasting detrimental effects of internalizing symptoms, there is great need for detecting early risk markers. One promising marker is freezing behavior. Whereas initial freezing reactions are essential for coping with threat, prolonged freezing has been associated with internalizing psychopathology. However, it remains unknown whether early life alterations in freezing reactions predict changes in internalizing symptoms during adolescent development. In a longitudinal study (N = 116), we tested prospectively whether observed freezing in infancy predicted the development of internalizing symptoms from childhood through late adolescence (until age 17). Both longer and absent infant freezing behavior during a standard challenge (robot-confrontation task) were associated with internalizing symptoms in adolescence. Specifically, absent infant freezing predicted a relative increase in internalizing symptoms consistently across development from relatively low symptom levels in childhood to relatively high levels in late adolescence. Longer infant freezing also predicted a relative increase in internalizing symptoms, but only up until early adolescence. This latter effect was moderated by peer stress and was followed by a later decrease in internalizing symptoms. The findings suggest that early deviations in defensive freezing responses signal risk for internalizing symptoms and may constitute important markers in future stress vulnerability and resilience studies.
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERPs) study, we explored the possibility of selectively tracking referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., “the girl” in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects “deep” situation model ambiguity or “superficial” textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., “the girl” with two girls introduced in the context, but one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although temporally referentially ambiguous nouns elicited a frontal negative shift compared to control words, the “double bound” but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Nieuwland, M. S., Coopmans, C. W., & Sommers, R. P. (2019). Distinguishing old from new referents during discourse comprehension: Evidence from ERPs and oscillations. Frontiers in Human Neuroscience, 13: 398. doi:10.3389/fnhum.2019.00398.

    Abstract

    In this EEG study, we used pre-registered and exploratory ERP and time-frequency analyses to investigate the resolution of anaphoric and non-anaphoric noun phrases during discourse comprehension. Participants listened to story contexts that described two antecedents, and subsequently read a target sentence with a critical noun phrase that lexically matched one antecedent (‘old’), matched two antecedents (‘ambiguous’), partially matched one antecedent in terms of semantic features (‘partial-match’), or introduced another referent (non-anaphoric, ‘new’). After each target sentence, participants judged whether the noun referred back to an antecedent (i.e., an ‘old/new’ judgment), which was easiest for ambiguous nouns and hardest for partially matching nouns. The noun-elicited N400 ERP component demonstrated initial sensitivity to repetition and semantic overlap, corresponding to repetition and semantic priming effects, respectively. New and partially matching nouns both elicited a subsequent frontal positivity, which suggested that partially matching anaphors may have been processed as new nouns temporarily. ERPs in an even later time window and ERPs time-locked to sentence-final words suggested that new and partially matching nouns had different effects on comprehension, with partially matching nouns incurring additional processing costs up to the end of the sentence. In contrast to the ERP results, the time-frequency results primarily demonstrated sensitivity to noun repetition, and did not differentiate partially matching anaphors from new nouns. In sum, our results show the ERP and time-frequency effects of referent repetition during discourse comprehension, and demonstrate the potentially demanding nature of establishing the anaphoric meaning of a novel noun.
  • Nieuwland, M. S. (2019). Do ‘early’ brain responses reveal word form prediction during language comprehension? A critical review. Neuroscience and Biobehavioral Reviews, 96, 367-400. doi:10.1016/j.neubiorev.2018.11.019.

    Abstract

    Current theories of language comprehension posit that readers and listeners routinely try to predict not only the meaning but also the visual or sound form of upcoming words. Whereas most neuroimaging studies on word prediction focus on the N400 ERP or its magnetic equivalent, various studies claim that word form prediction manifests itself in ‘early’, pre-N400 brain responses (e.g., ELAN, M100, P130, N1, P2, N200/PMN, N250). Modulations of these components are often taken as evidence that word form prediction impacts early sensory processes (the sensory hypothesis) or, alternatively, the initial stages of word recognition before word meaning is integrated with sentence context (the recognition hypothesis). Here, I comprehensively review studies on sentence- or discourse-level language comprehension that report such effects of prediction on early brain responses. I conclude that the reported evidence for the sensory hypothesis or word recognition hypothesis is weak and inconsistent, and highlight the urgent need for replication of previous findings. I discuss the implications and challenges to current theories of linguistic prediction and suggest avenues for future research.
  • Nievergelt, C. M., Maihofer, A. X., Klengel, T., Atkinson, E. G., Chen, C.-Y., Choi, K. W., Coleman, J. R. I., Dalvie, S., Duncan, L. E., Gelernter, J., Levey, D. F., Logue, M. W., Polimanti, R., Provost, A. C., Ratanatharathorn, A., Stein, M. B., Torres, K., Aiello, A. E., Almli, L. M., Amstadter, A. B., Andersen, S. B., Andreassen, O. A., Arbisi, P. A., Ashley-Koch, A. E., Austin, S. B., Avdibegovic, E., Babić, D., Bækvad-Hansen, M., Baker, D. G., Beckham, J. C., Bierut, L. J., Bisson, J. I., Boks, M. P., Bolger, E. A., Børglum, A. D., Bradley, B., Brashear, M., Breen, G., Bryant, R. A., Bustamante, A. C., Bybjerg-Grauholm, J., Calabrese, J. R., Caldas-de-Almeida, J. M., Dale, A. M., Daly, M. J., Daskalakis, N. P., Deckert, J., Delahanty, D. L., Dennis, M. F., Disner, S. G., Domschke, K., Dzubur-Kulenovic, A., Erbes, C. R., Evans, A., Farrer, L. A., Feeny, N. C., Flory, J. D., Forbes, D., Franz, C. E., Galea, S., Garrett, M. E., Gelaye, B., Geuze, E., Gillespie, C., Uka, A. G., Gordon, S. D., Guffanti, G., Hammamieh, R., Harnal, S., Hauser, M. A., Heath, A. C., Hemmings, S. M. J., Hougaard, D. M., Jakovljevic, M., Jett, M., Johnson, E. O., Jones, I., Jovanovic, T., Qin, X.-J., Junglen, A. G., Karstoft, K.-I., Kaufman, M. L., Kessler, R. C., Khan, A., Kimbrel, N. A., King, A. P., Koen, N., Kranzler, H. R., Kremen, W. S., Lawford, B. R., Lebois, L. A. M., Lewis, C. E., Linnstaedt, S. D., Lori, A., Lugonja, B., Luykx, J. J., Lyons, M. J., Maples-Keller, J., Marmar, C., Martin, A. R., Martin, N. G., Maurer, D., Mavissakalian, M. R., McFarlane, A., McGlinchey, R. E., McLaughlin, K. A., McLean, S. A., McLeay, S., Mehta, D., Milberg, W. P., Miller, M. W., Morey, R. A., Morris, C. P., Mors, O., Mortensen, P. B., Neale, B. M., Nelson, E. C., Nordentoft, M., Norman, S. B., O’Donnell, M., Orcutt, H. K., Panizzon, M. S., Peters, E. S., Peterson, A. L., Peverill, M., Pietrzak, R. H., Polusny, M. A., Rice, J. P., Ripke, S., Risbrough, V. B., Roberts, A. L., Rothbaum, A. O., Rothbaum, B. O., Roy-Byrne, P., Ruggiero, K., Rung, A., Rutten, B. P. F., Saccone, N. L., Sanchez, S. E., Schijven, D., Seedat, S., Seligowski, A. V., Seng, J. S., Sheerin, C. M., Silove, D., Smith, A. K., Smoller, J. W., Sponheim, S. R., Stein, D. J., Stevens, J. S., Sumner, J. A., Teicher, M. H., Thompson, W. K., Trapido, E., Uddin, M., Ursano, R. J., van den Heuvel, L. L., Van Hooff, M., Vermetten, E., Vinkers, C. H., Voisey, J., Wang, Y., Wang, Z., Werge, T., Williams, M. A., Williamson, D. E., Winternitz, S., Wolf, C., Wolf, E. J., Wolff, J. D., Yehuda, R., Young, R. M., Young, K. A., Zhao, H., Zoellner, L. A., Liberzon, I., Ressler, K. J., Haas, M., & Koenen, K. C. (2019). International meta-analysis of PTSD genome-wide association studies identifies sex- and ancestry-specific genetic risk loci. Nature Communications, 10(1): 4558. doi:10.1038/s41467-019-12576-w.

    Abstract

    The risk of posttraumatic stress disorder (PTSD) following trauma is heritable, but robust common variants have yet to be identified. In a multi-ethnic cohort including over 30,000 PTSD cases and 170,000 controls we conduct a genome-wide association study of PTSD. We demonstrate SNP-based heritability estimates of 5–20%, varying by sex. Three genome-wide significant loci are identified, 2 in European and 1 in African-ancestry analyses. Analyses stratified by sex implicate 3 additional loci in men. Along with other novel genes and non-coding RNAs, a Parkinson’s disease gene involved in dopamine regulation, PARK2, is associated with PTSD. Finally, we demonstrate that polygenic risk for PTSD is significantly predictive of re-experiencing symptoms in the Million Veteran Program dataset, although specific loci did not replicate. These results demonstrate the role of genetic variation in the biology of risk for PTSD and highlight the necessity of conducting sex-stratified analyses and expanding GWAS beyond European ancestry populations.
  • Nijland, L., & Janse, E. (Eds.). (2009). Auditory processing in speakers with acquired or developmental language disorders [Special Issue]. Clinical Linguistics and Phonetics, 23(3).
  • Noble, C., Sala, G., Peter, M., Lingwood, J., Rowland, C. F., Gobet, F., & Pine, J. (2019). The impact of shared book reading on children's language skills: A meta-analysis. Educational Research Review, 28: 100290. doi:10.1016/j.edurev.2019.100290.

    Abstract

    Shared book reading is thought to have a positive impact on young children's language development, with shared reading interventions often run in an attempt to boost children's language skills. However, despite the volume of research in this area, a number of issues remain outstanding. The current meta-analysis explored whether shared reading interventions are equally effective (a) across a range of study designs; (b) across a range of different outcome variables; and (c) for children from different SES groups. It also explored the potentially moderating effects of intervention duration, child age, use of dialogic reading techniques, person delivering the intervention and mode of intervention delivery.

    Our results show that, while there is an effect of shared reading on language development, this effect is smaller than reported in previous meta-analyses (g = 0.194, p = .002). They also show that this effect is moderated by the type of control group used and is negligible in studies with active control groups (g = 0.028, p = .703). Finally, they show no significant effects of differences in outcome variable (ps ≥ .286), socio-economic status (p = .658), or any of our other potential moderators (ps ≥ .077), and non-significant effects for studies with follow-ups (g = 0.139, p = .200). On the basis of these results, we make a number of recommendations for researchers and educators about the design and implementation of future shared reading interventions.
  • Noordzij, M., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2009). Brain mechanisms underlying human communication. Frontiers in Human Neuroscience, 3:14. doi:10.3389/neuro.09.014.2009.

    Abstract

    Human communication has been described as involving the coding-decoding of a conventional symbol system, which could be supported by parts of the human motor system (i.e. the “mirror neurons system”). However, this view does not explain how these conventions could develop in the first place. Here we target the neglected but crucial issue of how people organize their non-verbal behavior to communicate a given intention without pre-established conventions. We have measured behavioral and brain responses in pairs of subjects during communicative exchanges occurring in a real, interactive, on-line social context. In two fMRI studies, we found robust evidence that planning new communicative actions (by a sender) and recognizing the communicative intention of the same actions (by a receiver) relied on spatially overlapping portions of their brains (the right posterior superior temporal sulcus). The response of this region was lateralized to the right hemisphere, modulated by the ambiguity in meaning of the communicative acts, but not by their sensorimotor complexity. These results indicate that the sender of a communicative signal uses his own intention recognition system to make a prediction of the intention recognition performed by the receiver. This finding supports the notion that our communicative abilities are distinct from both sensorimotor processes and language abilities.
  • Norris, D., & Cutler, A. (1988). Speech recognition in French and English. MRC News, 39, 30-31.
  • Norris, D., & Cutler, A. (1988). The relative accessibility of phonemes and syllables. Perception and Psychophysics, 43, 541-550. Retrieved from http://www.psychonomic.org/search/view.cgi?id=8530.

    Abstract

    Previous research comparing detection times for syllables and for phonemes has consistently found that syllables are responded to faster than phonemes. This finding poses theoretical problems for strictly hierarchical models of speech recognition, in which smaller units should be able to be identified faster than larger units. However, inspection of the characteristics of previous experiments’ stimuli reveals that subjects have been able to respond to syllables on the basis of only a partial analysis of the stimulus. In the present experiment, five groups of subjects listened to identical stimulus material. Phoneme and syllable monitoring under standard conditions was compared with monitoring under conditions in which near matches of target and stimulus occurred on no-response trials. In the latter case, when subjects were forced to analyze each stimulus fully, phonemes were detected faster than syllables.
  • Nüse, R. (2007). Der Gebrauch und die Bedeutungen von auf, an und unter. Zeitschrift für Germanistische Linguistik, 35, 27-51.

    Abstract

    Present approaches to the semantics of the German prepositions auf, an and unter draw on two propositions: first, that spatial prepositions in general specify a region in the surrounding of the relatum object; second, that in the case of auf, an and unter, these regions are to be defined with concepts like the vertical and/or the topological surface (the whole surrounding exterior of an object). The present paper argues that the first proposition is right and that the second is wrong. That is, while it is true that prepositions specify regions, the regions specified by auf, an and unter should rather be defined in terms of everyday concepts like SURFACE, SIDE and UNDERSIDE. This idea is suggested by the fact that auf, an and unter refer to different regions in different kinds of relatum objects, and that these regions are the same as the regions called surfaces, sides and undersides. Furthermore, reading and usage preferences of auf, an and unter can be explained by a corresponding salience of the surfaces, sides and undersides of the relatum objects in question. All in all, therefore, a close look at the use of auf, an and unter with different classes of relatum objects reveals problems for a semantic approach that draws on concepts like the vertical, while it suggests meanings of these prepositions that refer to the surface, side and underside of an object.
  • Nuthmann, A., De Groot, F., Huettig, F., & Olivers, C. L. N. (2019). Extrafoveal attentional capture by object semantics. PLoS One, 14(5): e0217051. doi:10.1371/journal.pone.0217051.

    Abstract

    There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
  • Obleser, J., & Eisner, F. (2009). Pre-lexical abstraction of speech in the auditory cortex. Trends in Cognitive Sciences, 13, 14-19. doi:10.1016/j.tics.2008.09.005.

    Abstract

    Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.
  • O'Connor, L. (2007). 'Chop, shred, snap apart': Verbs of cutting and breaking in Lowland Chontal. Cognitive Linguistics, 18(2), 219-230. doi:10.1515/COG.2007.010.

    Abstract

    Typological descriptions of understudied languages reveal intriguing crosslinguistic variation in descriptions of events of object separation and destruction. In Lowland Chontal of Oaxaca, verbs of cutting and breaking lexicalize event perspectives that range from the common to the quite unusual, from the tearing of cloth to the snapping apart on the cross-grain of yarn. This paper describes the semantic and syntactic criteria that characterize three verb classes in this semantic domain, examines patterns of event construal, and takes a look at likely changes in these event descriptions from the perspective of endangered language recovery.
  • O'Connor, L. (2007). [Review of the book Pronouns by D.N.S. Bhat]. Journal of Pragmatics, 39(3), 612-616. doi:10.1016/j.pragma.2006.09.007.
  • Ogasawara, N., & Warner, N. (2009). Processing missing vowels: Allophonic processing in Japanese. Language and Cognitive Processes, 24, 376 -411. doi:10.1080/01690960802084028.

    Abstract

    The acoustic realisation of a speech sound varies, often showing allophonic variation triggered by surrounding sounds. Listeners recognise words and sounds well despite such variation, and even make use of allophonic variability in processing. This study reports five experiments on processing of the reduced/unreduced allophonic alternation of Japanese high vowels. The results show that listeners use phonological knowledge of their native language during phoneme processing and word recognition. However, interactions of the phonological and acoustic effects differ in these two processes. A facilitatory phonological effect and an inhibitory acoustic effect cancel one another out in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect. Four potential models of the processing of allophonic variation are discussed. The results can be accommodated in two of them, but require additional assumptions or modifications to the models, and primarily support lexical specification of allophonic variability.
  • O’Meara, C., Kung, S. S., & Majid, A. (2019). The challenge of olfactory ideophones: Reconsidering ineffability from the Totonac-Tepehua perspective. International Journal of American Linguistics, 85(2), 173-212. doi:10.1086/701801.

    Abstract

    Olfactory impressions are said to be ineffable, but little systematic exploration has been done to substantiate this. We explored olfactory language in Huehuetla Tepehua—a Totonac-Tepehua language spoken in Hidalgo, Mexico—which has a large inventory of ideophones, words with sound-symbolic properties used to describe perceptuomotor experiences. A multi-method study found Huehuetla Tepehua has 45 olfactory ideophones, illustrating intriguing sound-symbolic alternation patterns. Elaboration in the olfactory domain is not unique to this language; related Totonac-Tepehua languages also have impressive smell lexicons. Comparison across these languages shows olfactory and gustatory terms overlap in interesting ways, mirroring the physiology of smelling and tasting. However, although cognate taste terms are formally similar, olfactory terms are less so. We suggest the relative instability of smell vocabulary in comparison with those of taste likely results from the more varied olfactory experiences caused by the mutability of smells in different environments.
  • Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.

    Abstract

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to sign. Cognition, 191: 103996. doi:10.1016/j.cognition.2019.06.008.

    Abstract

    The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as means of expression. Despite their striking differences they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to these gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the influence of prior knowledge to acquire new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of ‘manual cognates’ that help non-signing adults to break into a new language at first exposure.
  • Ostarek, M., Joosen, D., Ishag, A., De Nijs, M., & Huettig, F. (2019). Are visual processes causally involved in “perceptual simulation” effects in the sentence-picture verification task? Cognition, 182, 84-94. doi:10.1016/j.cognition.2018.08.017.

    Abstract

    Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect, but, crucially, visual noise did not modulate it. However, when an interference technique was used that targeted high-level semantic processing (Experiment 3), the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) had only a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
  • Ostarek, M., Van Paridon, J., & Montero-Melis, G. (2019). Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors. Proceedings of the National Academy of Sciences of the United States of America, 116(44), 21972-21973. doi:10.1073/pnas.1912302116.
  • Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593-599. doi:10.1177/0963721419866441.

    Abstract

    Twenty years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more `loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., & Van Berkum, J. J. A. (2009). Does working memory capacity affect the ability to predict upcoming words in discourse? Brain Research, 1291, 92-101. doi:10.1016/j.brainres.2009.07.042.

    Abstract

    Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers in their overall capability to make predictions, because of their lack of cognitive resources. High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive ‘prime control’ stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300–600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900–1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender-inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse people rapidly make very specific predictions about the remainder of the story, as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from the previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated into previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches were found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Peeters, D., Vanlangendonck, F., Rüschemeyer, S.-A., & Dijkstra, T. (2019). Activation of the language control network in bilingual visual word recognition. Cortex, 111, 63-73. doi:10.1016/j.cortex.2018.10.012.

    Abstract

    Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.

    Additional information

    1-s2.0-S0010945218303459-mmc1.docx
  • Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26(3), 894-900. doi:10.3758/s13423-019-01571-3.

    Abstract

    This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental, psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behaviour, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g., speech) in isolation.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
