Publications

  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Reference and attitude in infant pointing. Journal of Child Language, 34(1), 1-20. doi:10.1017/S0305000906007689.

    Abstract

    We investigated two main components of infant declarative pointing, reference and attitude, in two experiments with a total of 106 preverbal infants at 1;0. When an experimenter (E) responded to the declarative pointing of these infants by attending to an incorrect referent (with positive attitude), infants repeated pointing within trials to redirect E’s attention, showing an understanding of E’s reference and active message repair. In contrast, when E identified infants’ referent correctly but displayed a disinterested attitude, infants did not repeat pointing within trials and pointed overall in fewer trials, showing an understanding of E’s unenthusiastic attitude about the referent. When E attended to infants’ intended referent AND shared interest in it, infants were most satisfied, showing no message repair within trials and pointing overall in more trials. These results suggest that by twelve months of age infant declarative pointing is a full communicative act aimed at sharing with others both attention to a referent and a specific attitude about that referent.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10(2), F1-F7. doi:10.1111/j.1467-7687.2006.00552.x.

    Abstract

    There is currently controversy over the nature of 1-year-olds' social-cognitive understanding and motives. In this study we investigated whether 12-month-old infants point for others with an understanding of their knowledge states and with a prosocial motive for sharing experiences with them. Declarative pointing was elicited in four conditions created by crossing two factors: an adult partner (1) was already attending to the target event or not, and (2) emoted positively or neutrally. Pointing was also coded after the event had ceased. The findings suggest that 12-month-olds point to inform others of events they do not know about, that they point to share an attitude about mutually attended events others already know about, and that they can point (already prelinguistically) to absent referents. These findings provide strong support for a mentalistic and prosocial interpretation of infants' prelinguistic communication.
  • Liszkowski, U., Carpenter, M., Henning, A., Striano, T., & Tomasello, M. (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7(3), 297-307. doi:10.1111/j.1467-7687.2004.00349.x.

    Abstract

    Infants point for various motives. Classically, one such motive is declarative, to share attention and interest with adults to events. Recently, some researchers have questioned whether infants have this motivation. In the current study, an adult reacted to 12-month-olds' pointing in different ways, and infants' responses were observed. Results showed that when the adult shared attention and interest (i.e. alternated gaze and emoted), infants pointed more frequently across trials and tended to prolong each point – presumably to prolong the satisfying interaction. However, when the adult emoted to the infant alone or looked only to the event, infants pointed less across trials and repeated points more within trials – presumably in an attempt to establish joint attention. Results suggest that 12-month-olds point declaratively and understand that others have psychological states that can be directed and shared.
  • Liu, X., Gao, Y., Di, Q., Hu, J., Lu, C., Nan, Y., Booth, J. R., & Liu, L. (2018). Differences between child and adult large-scale functional brain networks for reading tasks. Human Brain Mapping, 39(2), 662-679. doi:10.1002/hbm.23871.

    Abstract

    Reading is an important high‐level cognitive function of the human brain, requiring interaction among multiple brain regions. Revealing differences between children's large‐scale functional brain networks for reading tasks and those of adults helps us to understand how the functional network changes over reading development. Here we used functional magnetic resonance imaging data of 17 adults (19–28 years old) and 16 children (11–13 years old), and graph theoretical analyses to investigate age‐related changes in large‐scale functional networks during rhyming and meaning judgment tasks on pairs of visually presented Chinese characters. We found that: (1) adults had stronger inter‐regional connectivity and nodal degree in occipital regions, while children had stronger inter‐regional connectivity in temporal regions, suggesting that adults rely more on visual orthographic processing whereas children rely more on auditory phonological processing during reading. (2) Only adults showed between‐task differences in inter‐regional connectivity and nodal degree, whereas children showed no task differences, suggesting the topological organization of adults’ reading network is more specialized. (3) Children showed greater inter‐regional connectivity and nodal degree than adults in multiple subcortical regions; the hubs in children were more distributed in subcortical regions while the hubs in adults were more distributed in cortical regions. These findings suggest that reading development is manifested by a shift from reliance on subcortical to cortical regions. Taken together, our study suggests that Chinese reading development is supported by developmental changes in brain connectivity properties, and some of these changes may be domain‐general while others may be specific to the reading domain.
  • Xu, S., Liu, P., Chen, Y., Chen, Y., Zhang, W., Zhao, H., Cao, Y., Wang, F., Jiang, N., Lin, S., Li, B., Zhang, Z., Wei, Z., Fan, Y., Jin, Y., He, L., Zhou, R., Dekker, J. D., Tucker, H. O., Fisher, S. E., Yao, Z., Liu, Q., Xia, X., & Guo, X. (2018). Foxp2 regulates anatomical features that may be relevant for vocal behaviors and bipedal locomotion. Proceedings of the National Academy of Sciences of the United States of America, 115(35), 8799-8804. doi:10.1073/pnas.1721820115.

    Abstract

    Fundamental human traits, such as language and bipedalism, are associated with a range of anatomical adaptations in craniofacial shaping and skeletal remodeling. However, it is unclear how such morphological features arose during hominin evolution. FOXP2 is a brain-expressed transcription factor implicated in a rare disorder involving speech apraxia and language impairments. Analysis of its evolutionary history suggests that this gene may have contributed to the emergence of proficient spoken language. In the present study, through analyses of skeleton-specific knockout mice, we identified roles of Foxp2 in skull shaping and bone remodeling. Selective ablation of Foxp2 in cartilage disrupted pup vocalizations in a similar way to that of global Foxp2 mutants, which may be due to pleiotropic effects on craniofacial morphogenesis. Our findings also indicate that Foxp2 helps to regulate strength and length of hind limbs and maintenance of joint cartilage and intervertebral discs, which are all anatomical features that are susceptible to adaptations for bipedal locomotion. In light of the known roles of Foxp2 in brain circuits that are important for motor skills and spoken language, we suggest that this gene may have been well placed to contribute to coevolution of neural and anatomical adaptations related to speech and bipedal locomotion.
  • Long, M., Horton, W. S., Rohde, H., & Sorace, A. (2018). Individual differences in switching and inhibition predict perspective-taking across the lifespan. Cognition, 170, 25-30. doi:10.1016/j.cognition.2017.09.004.

    Abstract

    Studies exploring the influence of executive functions (EF) on perspective-taking have focused on inhibition and working memory in young adults or clinical populations. Less consideration has been given to more complex capacities that also involve switching attention between perspectives, or to changes in EF and concomitant effects on perspective-taking across the lifespan. To address this, we assessed whether individual differences in inhibition and attentional switching in healthy adults (ages 17–84) predict performance on a task in which speakers identified targets for a listener with size-contrasting competitors in common or privileged ground. Modification differences across conditions decreased with age. Further, perspective taking interacted with EF measures: youngest adults’ sensitivity to perspective was best captured by their inhibitory performance; oldest adults’ sensitivity was best captured by switching performance. Perspective-taking likely involves multiple aspects of EF, as revealed by considering a wider range of EF tasks and individual capacities across the lifespan.
  • Loo, S. K., Fisher, S. E., Francks, C., Ogdie, M. N., MacPhie, I. L., Yang, M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2004). Genome-wide scan of reading ability in affected sibling pairs with attention-deficit/hyperactivity disorder: Unique and shared genetic effects. Molecular Psychiatry, 9, 485-493. doi:10.1038/sj.mp.4001450.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) and reading disability (RD) are common highly heritable disorders of childhood, which frequently co-occur. Data from twin and family studies suggest that this overlap is, in part, due to shared genetic underpinnings. Here, we report the first genome-wide linkage analysis of measures of reading ability in children with ADHD, using a sample of 233 affected sibling pairs who previously participated in a genome-wide scan for susceptibility loci in ADHD. Quantitative trait locus (QTL) analysis of a composite reading factor defined from three highly correlated reading measures identified suggestive linkage (multipoint maximum lod score, MLS>2.2) in four chromosomal regions. Two regions (16p, 17q) overlap those implicated by our previous genome-wide scan for ADHD in the same sample: one region (2p) provides replication for an RD susceptibility locus, and one region (10q) falls approximately 35 cM from a modestly highlighted region in an independent genome-wide scan of siblings with ADHD. Investigation of an individual reading measure of Reading Recognition supported linkage to putative RD susceptibility regions on chromosome 8p (MLS=2.4) and 15q (MLS=1.38). Thus, the data support the existence of genetic factors that have pleiotropic effects on ADHD and reading ability--as suggested by shared linkages on 16p, 17q and possibly 10q--but also those that appear to be unique to reading--as indicated by linkages on 2p, 8p and 15q that coincide with those previously found in studies of RD. Our study also suggests that reading measures may represent useful phenotypes in ADHD research. The eventual identification of genes underlying these unique and shared linkages may increase our understanding of ADHD, RD and the relationship between the two.
  • Lumaca, M., Ravignani, A., & Baggio, G. (2018). Music evolution in the laboratory: Cultural transmission meets neurophysiology. Frontiers in Neuroscience, 12: 246. doi:10.3389/fnins.2018.00246.

    Abstract

    In recent years, there has been renewed interest in the biological and cultural evolution of music, and specifically in the role played by perceptual and cognitive factors in shaping core features of musical systems, such as melody, harmony, and rhythm. One proposal originates in the language sciences. It holds that aspects of musical systems evolve by adapting gradually, in the course of successive generations, to the structural and functional characteristics of the sensory and memory systems of learners and “users” of music. This hypothesis has found initial support in laboratory experiments on music transmission. In this article, we first review some of the most important theoretical and empirical contributions to the field of music evolution. Next, we identify a major current limitation of these studies, i.e., the lack of direct neural support for the hypothesis of cognitive adaptation. Finally, we discuss a recent experiment in which this issue was addressed by using event-related potentials (ERPs). We suggest that the introduction of neurophysiology in cultural transmission research may provide novel insights on the micro-evolutionary origins of forms of variation observed in cultural systems.
  • Lutzenberger, H. (2018). Manual and nonmanual features of name signs in Kata Kolok and Sign Language of the Netherlands. Sign Language Studies, 18(4), 546-569. doi:10.1353/sls.2018.0016.

    Abstract

    Name signs are based on descriptions, initialization, and loan translations. Nyst and Baker (2003) have found crosslinguistic similarities in the phonology of name signs, such as a preference for one-handed signs and for the head location. Studying Kata Kolok (KK), a rural sign language without indigenous fingerspelling, strongly suggests that one-handedness is not correlated to initialization, but represents a more general feature of name sign phonology. Like in other sign languages, the head location is used frequently in both KK and Sign Language of the Netherlands (NGT) name signs. The use of nonmanuals, however, is strikingly different. NGT name signs are always accompanied by mouthings, which are absent in KK. Instead, KK name signs may use mouth gestures; these may disambiguate manually identical name signs, and even form independent name signs without any manual features.
  • Magyari, L. (2004). Nyelv és/vagy evolúció? [Book review]. Magyar Pszichológiai Szemle, 59(4), 591-607. doi:10.1556/MPSzle.59.2004.4.7.

    Abstract

    Language and/or evolution: Is an evolutionary explanation of language possible? [Derek Bickerton: Nyelv és evolúció] (Lilla Magyari); A historical reader on the brain [Charles G. Gross: Agy, látás, emlékezet. Mesék az idegtudomány történetéből] (Edit Anna Garab); Art or science? [Tihamér Margitay: Az érvelés mestersége. Érvelések elemzése, értékelése és kritikája] (Gábor Zemplén); Are we really rational? [Herbert Simon: Az ésszerűség szerepe az emberi életben] (Péter Kardos); Sex differences in cognition [Doreen Kimura: Női agy, férfi agy] (Noémi Hahn).
  • Majid, A. (2004). Out of context. The Psychologist, 17(6), 330-330.
  • Majid, A., Bowerman, M., Van Staden, M., & Boster, J. S. (2007). The semantic categories of cutting and breaking events: A crosslinguistic perspective. Cognitive Linguistics, 18(2), 133-152. doi:10.1515/COG.2007.005.

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2007). The linguistic description of minimal social scenarios affects the extent of causal inference making. Journal of Experimental Social Psychology, 43(6), 918-932. doi:10.1016/j.jesp.2006.10.016.

    Abstract

    There is little consensus regarding the circumstances in which people spontaneously generate causal inferences, and in particular whether they generate inferences about the causal antecedents or the causal consequences of events. We tested whether people systematically infer causal antecedents or causal consequences to minimal social scenarios by using a continuation methodology. People overwhelmingly produced causal antecedent continuations for descriptions of interpersonal events (John hugged Mary), but causal consequence continuations to descriptions of transfer events (John gave a book to Mary). This demonstrates that there is no global cognitive style, but rather inference generation is crucially tied to the input. Further studies examined the role of event unusualness, number of participators, and verb-type on the likelihood of producing a causal antecedent or causal consequence inference. We conclude that inferences are critically guided by the specific verb used.
  • Majid, A., Roberts, S. G., Cilissen, L., Emmorey, K., Nicodemus, B., O'Grady, L., Woll, B., LeLan, B., De Sousa, H., Cansler, B. L., Shayan, S., De Vos, C., Senft, G., Enfield, N. J., Razak, R. A., Fedden, S., Tufvesson, S., Dingemanse, M., Ozturk, O., Brown, P., Hill, C., Le Guen, O., Hirtzel, V., Van Gijn, R., Sicoli, M. A., & Levinson, S. C. (2018). Differential coding of perception in the world’s languages. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11369-11376. doi:10.1073/pnas.1720419115.

    Abstract

    Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.
  • Majid, A., & Bowerman, M. (Eds.). (2007). Cutting and breaking events: A crosslinguistic perspective [Special Issue]. Cognitive Linguistics, 18(2).

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A. (2004). Data elicitation methods. Language Archive Newsletter, 1(2), 6-6.
  • Majid, A. (2004). Developing clinical understanding. The Psychologist, 17, 386-387.
  • Majid, A. (2004). Coned to perfection. The Psychologist, 17(7), 386-386.
  • Majid, A., Bowerman, M., Kita, S., Haun, D. B. M., & Levinson, S. C. (2004). Can language restructure cognition? The case for space. Trends in Cognitive Sciences, 8(3), 108-114. doi:10.1016/j.tics.2004.01.003.

    Abstract

    Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies crossculturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains.
  • Majid, A. (2004). An integrated view of cognition [Review of the book Rethinking implicit memory ed. by J. S. Bowers and C. J. Marsolek]. The Psychologist, 17(3), 148-149.
  • Majid, A. (2004). [Review of the book The new handbook of language and social psychology ed. by W. Peter Robinson and Howard Giles]. Language and Society, 33(3), 429-433.
  • Majid, A., Gullberg, M., Van Staden, M., & Bowerman, M. (2007). How similar are semantic categories in closely related languages? A comparison of cutting and breaking in four Germanic languages. Cognitive Linguistics, 18(2), 179-194. doi:10.1515/COG.2007.007.

    Abstract

    Are the semantic categories of very closely related languages the same? We present a new methodology for addressing this question. Speakers of English, German, Dutch and Swedish described a set of video clips depicting cutting and breaking events. The verbs elicited were then subjected to cluster analysis, which groups scenes together based on similarity (determined by shared verbs). Using this technique, we find that there are surprising differences among the languages in the number of categories, their exact boundaries, and the relationship of the terms to one another, all of which is circumscribed by a common semantic space.
  • Majid, A. (2018). Humans are neglecting our sense of smell. Here's what we could gain by fixing that. Time, March 7, 2018: 5130634.
  • Majid, A., & Kruspe, N. (2018). Hunter-gatherer olfaction is special. Current Biology, 28(3), 409-413. doi:10.1016/j.cub.2017.12.014.

    Abstract

    People struggle to name odors, but this limitation is not universal. Majid and Kruspe investigate whether superior olfactory performance is due to subsistence, ecology, or language family. By comparing closely related communities in the Malay Peninsula, they find that only hunter-gatherers are proficient odor namers, suggesting that subsistence is crucial.

    Additional information

    The data are archived at RWAAI.
  • Majid, A., Burenhult, N., Stensmyr, M., De Valk, J., & Hansson, B. S. (2018). Olfactory language and abstraction across cultures. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 373: 20170139. doi:10.1098/rstb.2017.0139.

    Abstract

    Olfaction presents a particularly interesting arena to explore abstraction in language. Like other abstract domains, such as time, odours can be difficult to conceptualize. An odour cannot be seen or held, it can be difficult to locate in space, and for most people odours are difficult to verbalize. On the other hand, odours give rise to primary sensory experiences. Every time we inhale we are using olfaction to make sense of our environment. We present new experimental data from 30 Jahai hunter-gatherers from the Malay Peninsula and 30 matched Dutch participants from the Netherlands in an odour naming experiment. Participants smelled monomolecular odorants and named odours while reaction times, odour descriptors and facial expressions were measured. We show that while Dutch speakers relied on concrete descriptors, i.e. they referred to odour sources (e.g. smells like lemon), the Jahai used abstract vocabulary to name the same odours (e.g. musty). Despite this differential linguistic categorization, analysis of facial expressions showed that the two groups, nevertheless, had the same initial emotional reactions to odours. Critically, these cross-cultural data present a challenge for how to think about abstraction in language.
  • Mamus, E., & Boduroglu, A. (2018). The role of context on boundary extension. Visual Cognition, 26(2), 115-130. doi:10.1080/13506285.2017.1399947.

    Abstract

    Boundary extension (BE) is a memory error in which observers remember more of a scene than they actually viewed. This error reflects one’s prediction that a scene naturally continues and is driven by scene schema and contextual knowledge. In two separate experiments we investigated the necessity of context and scene schema in BE. In Experiment 1, observers viewed scenes that either contained semantically consistent or inconsistent objects as well as objects on white backgrounds. In both types of scenes and in the no-background condition there was a BE effect; critically, semantic inconsistency in scenes reduced the magnitude of BE. In Experiment 2 when we used abstract shapes instead of meaningful objects, there was no BE effect. We suggest that although scene schema is necessary to elicit BE, contextual consistency is not required.
  • Manahova, M. E., Mostert, P., Kok, P., Schoffelen, J.-M., & De Lange, F. P. (2018). Stimulus familiarity and expectation jointly modulate neural activity in the visual ventral stream. Journal of Cognitive Neuroscience, 30(9), 1366-1377. doi:10.1162/jocn_a_01281.

    Abstract

    Prior knowledge about the visual world can change how a visual stimulus is processed. Two forms of prior knowledge are often distinguished: stimulus familiarity (i.e., whether a stimulus has been seen before) and stimulus expectation (i.e., whether a stimulus is expected to occur, based on the context). Neurophysiological studies in monkeys have shown suppression of spiking activity both for expected and for familiar items in object-selective inferotemporal cortex. It is an open question, however, if and how these types of knowledge interact in their modulatory effects on the sensory response. To address this issue and to examine whether previous findings generalize to noninvasively measured neural activity in humans, we separately manipulated stimulus familiarity and expectation while noninvasively recording human brain activity using magnetoencephalography. We observed independent suppression of neural activity by familiarity and expectation, specifically in the lateral occipital complex, the putative human homologue of monkey inferotemporal cortex. Familiarity also led to sharpened response dynamics, which was predominantly observed in early visual cortex. Together, these results show that distinct types of sensory knowledge jointly determine the amount of neural resources dedicated to object processing in the visual ventral stream.
  • Mandy, W., Pellicano, L., St Pourcain, B., Skuse, D., & Heron, J. (2018). The development of autistic social traits across childhood and adolescence in males and females. The Journal of Child Psychology and Psychiatry, 59(11), 1143-1151. doi:10.1111/jcpp.12913.

    Abstract

    Background

    Autism is a dimensional condition, representing the extreme end of a continuum of social competence that extends throughout the general population. Currently, little is known about how autistic social traits (ASTs), measured across the full spectrum of severity, develop during childhood and adolescence, including whether there are developmental differences between boys and girls. Therefore, we sought to chart the trajectories of ASTs in the general population across childhood and adolescence, with a focus on gender differences.
    Methods

    Participants were 9,744 males (n = 4,784) and females (n = 4,960) from ALSPAC, a UK birth cohort study. ASTs were assessed when participants were aged 7, 10, 13 and 16 years, using the parent‐report Social Communication Disorders Checklist. Data were modelled using latent growth curve analysis.
    Results

    Developmental trajectories of males and females were nonlinear, showing a decline from 7 to 10 years, followed by an increase between 10 and 16 years. At 7 years, males had higher levels of ASTs than females (mean raw score difference = 0.88, 95% CI [.72, 1.04]), and were more likely (odds ratio [OR] = 1.99; 95% CI, 1.82, 2.16) to score in the clinical range on the SCDC. By 16 years this gender difference had disappeared: males and females had, on average, similar levels of ASTs (mean difference = 0.00, 95% CI [−0.19, 0.19]) and were equally likely to score in the SCDC's clinical range (OR = 0.91, 95% CI, 0.73, 1.10). This was the result of an increase in females’ ASTs between 10 and 16 years.
    Conclusions

    There are gender‐specific trajectories of autistic social impairment, with females more likely than males to experience an escalation of ASTs during early‐ and midadolescence. It remains to be discovered whether the observed female adolescent increase in ASTs represents the genuine late onset of social difficulties or earlier, subtle, pre‐existing difficulties becoming more obvious.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L., Heritage, J., & McGlynn, E. A. (2004). Racial/ethnic variation in parent expectations for antibiotics: Implications for public health campaigns. Pediatrics, 113(5), 385-394.
  • Marklund, P., Fransson, P., Cabeza, R., Petersson, K. M., Ingvar, M., & Nyberg, L. (2007). Sustained and transient neural modulations in prefrontal cortex related to declarative long-term memory, working memory, and attention. Cortex, 43(1), 22-37. doi:10.1016/S0010-9452(08)70443-X.

    Abstract

    Common activations in prefrontal cortex (PFC) during episodic and semantic long-term memory (LTM) tasks have been hypothesized to reflect functional overlap in terms of working memory (WM) and cognitive control. To evaluate a WM account of LTM-general activations, the present study took into consideration that cognitive task performance depends on the dynamic operation of multiple component processes, some of which are stimulus-synchronous and transient in nature; and some that are engaged throughout a task in a sustained fashion. PFC and WM may be implicated in both of these temporally independent components. To elucidate these possibilities we employed mixed blocked/event-related functional magnetic resonance imaging (fMRI) procedures to assess the extent to which sustained or transient activation patterns overlapped across tasks indexing episodic and semantic LTM, attention (ATT), and WM. Within PFC, ventrolateral and medial areas exhibited sustained activity across all tasks, whereas more anterior regions including right frontopolar cortex were commonly engaged in sustained processing during the three memory tasks. These findings do not support a WM account of sustained frontal responses during LTM tasks, but instead suggest that the pattern that was common to all tasks reflects general attentional set/vigilance, and that the shared WM-LTM pattern mediates control processes related to upholding task set. Transient responses during the three memory tasks were assessed relative to ATT to isolate item-specific mnemonic processes and were found to be largely distinct from sustained effects. Task-specific effects were observed for each memory task. In addition, a common item response for all memory tasks involved left dorsolateral PFC (DLPFC). The latter response might be seen as reflecting WM processes during LTM retrieval. Thus, our findings suggest that a WM account of shared PFC recruitment in LTM tasks holds for common transient item-related responses rather than sustained state-related responses that are better seen as reflecting more general attentional/control processes.
  • Martin, A. E. (2018). Cue integration during sentence comprehension: Electrophysiological evidence from ellipsis. PLoS One, 13(11): e0206616. doi:10.1371/journal.pone.0206616.

    Abstract

    Language processing requires us to integrate incoming linguistic representations with representations of past input, often across intervening words and phrases. This computational situation has been argued to require retrieval of the appropriate representations from memory via a set of features or representations serving as retrieval cues. However, even within a cue-based retrieval account of language comprehension, both the structure of retrieval cues and the particular computation that underlies direct-access retrieval are still underspecified. Evidence from two event-related brain potential (ERP) experiments that show cue-based interference from different types of linguistic representations during ellipsis comprehension is consistent with an architecture wherein different cue types are integrated, and where the interaction of cue with the recent contents of memory determines processing outcome, including expression of the interference effect in ERP componentry. I conclude that retrieval likely includes a computation where cues are integrated with the contents of memory via a linear weighting scheme, and I propose vector addition as a candidate formalization of this computation. I attempt to account for these effects and other related phenomena within a broader cue-based framework of language processing.
  • Martin, A. E., & McElree, B. (2018). Retrieval cues and syntactic ambiguity resolution: Speed-accuracy tradeoff evidence. Language, Cognition and Neuroscience, 33(6), 769-783. doi:10.1080/23273798.2018.1427877.

    Abstract

    Language comprehension involves coping with ambiguity and recovering from misanalysis. Syntactic ambiguity resolution is associated with increased reading times, a classic finding that has shaped theories of sentence processing. However, reaction times conflate the time it takes a process to complete with the quality of the behavior-related information available to the system. We therefore used the speed-accuracy tradeoff procedure (SAT) to derive orthogonal estimates of processing time and interpretation accuracy, and tested whether stronger retrieval cues (via semantic relatedness: neighed->horse vs. fell->horse) aid interpretation during recovery. On average, ambiguous sentences took 250ms longer (SAT rate) to interpret than unambiguous controls, demonstrating veridical differences in processing time. Retrieval cues more strongly related to the true subject always increased accuracy, regardless of ambiguity. These findings are consistent with a language processing architecture where cue-driven operations give rise to interpretation, and wherein diagnostic cues aid retrieval, regardless of parsing difficulty or structural uncertainty.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Listening to yourself is special: Evidence from global speech rate tracking. PLoS One, 13(9): e0203571. doi:10.1371/journal.pone.0203571.

    Abstract

    Listeners are known to use adjacent contextual speech rate in processing temporally ambiguous speech sounds. For instance, an ambiguous vowel between short /A/ and long /a:/ in Dutch sounds relatively long (i.e., as /a:/) embedded in a fast precursor sentence, but short in a slow sentence. Besides the local speech rate, listeners also track talker-specific global speech rates. However, it is yet unclear whether other talkers' global rates are encoded with reference to a listener's self-produced rate. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly. The groups were compared on their perception of ambiguous /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech and again evaluated target vowels in neutral rate speech. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 but with a new participant sample that was unfamiliar with the participants from Experiment 2. This experiment revealed fewer /a:/ responses in neutral speech in the group also listening to a fast rate, suggesting that neutral speech sounds slow in the presence of a fast talker and vice versa. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception in dialogue settings.
  • McQueen, J. M., & Viebahn, M. C. (2007). Tracking recognition of spoken words by tracking looks to printed words. Quarterly Journal of Experimental Psychology, 60(5), 661-671. doi:10.1080/17470210601183890.

    Abstract

    Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.

    Abstract

    This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say "quarter to three") requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target in one or two hours and in five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-minute difference between prime and target, but the difference in hour had no effect. Moreover, the distance in minutes had an effect only for half past the hour and the coming hour, but not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
  • Mei, C., Fedorenko, E., Amor, D. J., Boys, A., Hoeflin, C., Carew, P., Burgess, T., Fisher, S. E., & Morgan, A. T. (2018). Deep phenotyping of speech and language skills in individuals with 16p11.2 deletion. European Journal of Human Genetics, 26(5), 676-686. doi:10.1038/s41431-018-0102-x.

    Abstract

    Recurrent deletions of a ~600-kb region of 16p11.2 have been associated with a highly penetrant form of childhood apraxia of speech (CAS). Yet prior findings have been based on a small, potentially biased sample using retrospectively collected data. We examine the prevalence of CAS in a larger cohort of individuals with 16p11.2 deletion using a prospectively designed assessment battery. The broader speech and language phenotype associated with carrying this deletion was also examined. 55 participants with 16p11.2 deletion (47 children, 8 adults) underwent deep phenotyping to test for the presence of CAS and other speech and language diagnoses. Standardized tests of oral motor functioning, speech production, language, and non-verbal IQ were conducted. The majority of children (77%) and half of adults (50%) met criteria for CAS. Other speech outcomes were observed including articulation or phonological errors (i.e., phonetic and cognitive-linguistic errors, respectively), dysarthria (i.e., neuromuscular speech disorder), minimal verbal output, and even typical speech in some. Receptive and expressive language impairment was present in 73% and 70% of children, respectively. Co-occurring neurodevelopmental conditions (e.g., autism) and non-verbal IQ did not correlate with the presence of CAS. Findings indicate that CAS is highly prevalent in children with 16p11.2 deletion with symptoms persisting into adulthood for many. Yet CAS occurs in the context of a broader speech and language profile and other neurobehavioral deficits. Further research will elucidate specific genetic and neural pathways leading to speech and language deficits in individuals with 16p11.2 deletions, resulting in more targeted speech therapies addressing etiological pathways.
  • Melinger, A., & Levelt, W. J. M. (2004). Gesture and the communicative intention of the speaker. Gesture, 4(2), 119-141.

    Abstract

    This paper aims to determine whether iconic tracing gestures produced while speaking constitute part of the speaker’s communicative intention. We used a picture description task in which speakers must communicate the spatial and color information of each picture to an interlocutor. By establishing the necessary minimal content of an intended message, we determined whether speech produced with concurrent gestures is less explicit than speech without gestures. We argue that a gesture must be communicatively intended if it expresses necessary information that was nevertheless omitted from speech. We found that speakers who produced iconic gestures representing spatial relations omitted more required spatial information from their descriptions than speakers who did not gesture. These results provide evidence that speakers intend these gestures to communicate. The results have implications for the cognitive architectures that underlie the production of gesture and speech.
  • Menenti, L., & Burani, C. (2007). What causes the effect of age of acquisition in lexical processing? Quarterly Journal of Experimental Psychology, 60(5), 652-660. doi:10.1080/17470210601100126.

    Abstract

    Three hypotheses for effects of age of acquisition (AoA) in lexical processing are compared: the cumulative frequency hypothesis (frequency and AoA both influence the number of encounters with a word, which influences processing speed), the semantic hypothesis (early-acquired words are processed faster because they are more central in the semantic network), and the neural network model (early-acquired words are faster because they are acquired when a network has maximum plasticity). In a regression study of lexical decision (LD) and semantic categorization (SC) in Italian and Dutch, contrary to the cumulative frequency hypothesis, AoA coefficients were larger than frequency coefficients, and, contrary to the semantic hypothesis, the effect of AoA was not larger in SC than in LD. The neural network model was supported.
  • Meulenbroek, O., Petersson, K. M., Voermans, N., Weber, B., & Fernández, G. (2004). Age differences in neural correlates of route encoding and route recognition. Neuroimage, 22, 1503-1514. doi:10.1016/j.neuroimage.2004.04.007.

    Abstract

    Spatial memory deficits are core features of aging-related changes in cognitive abilities. The neural correlates of these deficits are largely unknown. In the present study, we investigated the neural underpinnings of age-related differences in spatial memory by functional MRI using a navigational memory task with route encoding and route recognition conditions. We investigated 20 healthy young (18-29 years old) and 20 healthy old adults (53-78 years old) in a random effects analysis. Old subjects showed slightly poorer performance than young subjects. Compared to the control condition, route encoding and route recognition showed activation of the dorsal and ventral visual processing streams and the frontal eye fields in both groups of subjects. Compared to old adults, young subjects showed stronger activations during route encoding in the dorsal and the ventral visual processing stream (supramarginal gyrus and posterior fusiform/parahippocampal areas). In addition, young subjects showed weaker anterior parahippocampal activity during route recognition compared to the old group. In contrast, old compared to young subjects showed less suppressed activity in the left perisylvian region and the anterior cingulate cortex during route encoding. Our findings suggest that age-related navigational memory deficits might be caused by less effective route encoding based on reduced posterior fusiform/parahippocampal and parietal functionality combined with diminished inhibition of perisylvian and anterior cingulate cortices correlated with less effective suppression of task-irrelevant information. In contrast, age differences in neural correlates of route recognition seem to be rather subtle. Old subjects might show a diminished familiarity signal during route recognition in the anterior parahippocampal region.
  • Meyer, A. S., & Damian, M. F. (2007). Activation of distractor names in the picture-picture interference paradigm. Memory & Cognition, 35, 494-503.

    Abstract

    In four experiments, participants named target pictures that were accompanied by distractor pictures with phonologically related or unrelated names. Across experiments, the type of phonological relationship between the targets and the related distractors was varied: They were homophones (e.g., bat [animal/baseball]), or they shared word-initial segments (e.g., dog-doll) or word-final segments (e.g., ball-wall). The participants either named the objects after an extensive familiarization and practice phase or without any familiarization or practice. In all of the experiments, the mean target-naming latency was shorter in the related than in the unrelated condition, demonstrating that the phonological form of the name of the distractor picture became activated. These results are best explained within a cascaded model of lexical access—that is, under the assumption that the recognition of an object leads to the activation of its name.
  • Meyer, A. S., Belke, E., Telling, A. L., & Humphreys, G. W. (2007). Early activation of object names in visual search. Psychonomic Bulletin & Review, 14, 710-716.

    Abstract

    In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target, for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention.
  • Meyer, A. S., Van der Meulen, F. F., & Brooks, A. (2004). Eye movements during speech planning: Talking about present and remembered objects. Visual Cognition, 11, 553-576. doi:10.1080/13506280344000248.

    Abstract

    Earlier work has shown that speakers naming several objects usually look at each of them before naming them (e.g., Meyer, Sleiderink, & Levelt, 1998). In the present study, participants saw pictures and described them in utterances such as "The chair next to the cross is brown", where the colour of the first object was mentioned after another object had been mentioned. In Experiment 1, we examined whether the speakers would look at the first object (the chair) only once, before naming the object, or twice (before naming the object and before naming its colour). In Experiment 2, we examined whether speakers about to name the colour of the object would look at the object region again when the colour or the entire object had been removed while they were looking elsewhere. We found that speakers usually looked at the target object again before naming its colour, even when the colour was not displayed any more. Speakers were much less likely to fixate upon the target region when the object had been removed from view. We propose that the object contours may serve as a memory cue supporting the retrieval of the associated colour information. The results show that a speaker's eye movements in a picture description task, far from being random, depend on the available visual information and the content and structure of the planned utterance.
  • Meyer, A. S., & Schriefers, H. (1991). Phonological facilitation in picture-word interference experiments: Effects of stimulus onset asynchrony and types of interfering stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 1146-1160. doi:10.1037/0278-7393.17.6.1146.

    Abstract

    Subjects named pictures while hearing distractor words that shared word-initial or word-final segments with the picture names or were unrelated to the picture names. The relative timing of distractor and picture presentation was varied. Compared with unrelated distractors, both types of related distractors facilitated picture naming under certain timing conditions. Begin-related distractors facilitated the naming responses if the shared segments began 150 ms before, at, or 150 ms after picture onset. By contrast, end-related distractors only facilitated the responses if the shared segments began at or 150 ms after picture onset. The results suggest that the phonological encoding of the beginning of a word is initiated before the encoding of its end.
  • Meyer, A. S. (1991). The time course of phonological encoding in language production: Phonological encoding inside a syllable. Journal of Memory and Language, 30, 69-89. doi:10.1016/0749-596X(91)90011-8.

    Abstract

    Eight experiments were carried out investigating whether different parts of a syllable must be phonologically encoded in a specific order or whether they can be encoded in any order. A speech production task was used in which the subjects in each test trial had to utter one out of three or five response words as quickly as possible. In the so-called homogeneous condition these words were related in form, while in the heterogeneous condition they were unrelated in form. For monosyllabic response words shorter reaction times were obtained in the homogeneous than in the heterogeneous condition when the words had the same onset, but not when they had the same rhyme. Similarly, for disyllabic response words, the reaction times were shorter in the homogeneous than in the heterogeneous condition when the words shared only the onset of the first syllable, but not when they shared only its rhyme. Furthermore, a stronger facilitatory effect was observed when the words had the entire first syllable in common than when they only shared the onset, or the onset and the nucleus, but not the coda of the first syllable. These results suggest that syllables are phonologically encoded in two ordered steps, the first of which is dedicated to the onset and the second to the rhyme.
  • Meyer, A. S., Alday, P. M., Decuyper, C., & Knudsen, B. (2018). Working together: Contributions of corpus analyses and experimental psycholinguistics to understanding conversation. Frontiers in Psychology, 9: 525. doi:10.3389/fpsyg.2018.00525.

    Abstract

    As conversation is the most important way of using language, linguists and psychologists should combine forces to investigate how interlocutors deal with the cognitive demands arising during conversation. Linguistic analyses of corpora of conversation are needed to understand the structure of conversations, and experimental work is indispensable for understanding the underlying cognitive processes. We argue that joint consideration of corpus and experimental data is most informative when the utterances elicited in a lab experiment match those extracted from a corpus in relevant ways. This requirement to compare like with like seems obvious but is not trivial to achieve. To illustrate this approach, we report two experiments where responses to polar (yes/no) questions were elicited in the lab and the response latencies were compared to gaps between polar questions and answers in a corpus of conversational speech. We found, as expected, that responses were given faster when they were easy to plan and planning could be initiated earlier than when they were harder to plan and planning was initiated later. Overall, in all but one condition, the latencies were longer than one would expect based on the analyses of corpus data. We discuss the implication of this partial match between the data sets and more generally how corpus and experimental data can best be combined in studies of conversation.

    Additional information

    Data_Sheet_1.pdf
  • Meyer, A. S., Belke, E., Häcker, C., & Mortensen, L. (2007). Use of word length information in utterance planning. Journal of Memory and Language, 57, 210-231. doi:10.1016/j.jml.2006.10.005.

    Abstract

    Griffin [Griffin, Z. M. (2003). A reversed length effect in coordinating the preparation and articulation of words in speaking. Psychonomic Bulletin & Review, 10, 603-609.] found that speakers naming object pairs spent more time before utterance onset looking at the second object when the first object name was short than when it was long. She proposed that this reversed length effect arose because the speakers' decision when to initiate an utterance was based, in part, on their estimate of the spoken duration of the first object name and the time available during its articulation to plan the second object name. In Experiment 1 of the present study, participants named object pairs. They spent more time looking at the first object when its name was monosyllabic than when it was trisyllabic, and, as in Griffin's study, the average gaze-speech lag (the time between the end of the gaze to the first object and onset of its name, which corresponds closely to the pre-speech inspection time for the second object) showed a reversed length effect. Experiments 2 and 3 showed that this effect was not due to a trade-off between the time speakers spent looking at the first and second object before speech onset. Experiment 4 yielded a reversed length effect when the second object was replaced by a symbol (x or +), which the participants had to categorise. We propose a novel account of the reversed length effect, which links it to the incremental nature of phonological encoding and articulatory planning rather than the speaker's estimate of the length of the first object name.
  • Mitterer, H., Reinisch, E., & McQueen, J. M. (2018). Allophones, not phonemes in spoken-word recognition. Journal of Memory and Language, 98, 77-92. doi:10.1016/j.jml.2017.09.005.

    Abstract

    What are the phonological representations that listeners use to map information about the segmental content of speech onto the mental lexicon during spoken-word recognition? Recent evidence from perceptual-learning paradigms seems to support (context-dependent) allophones as the basic representational units in spoken-word recognition. But recent evidence from a selective-adaptation paradigm seems to suggest that context-independent phonemes also play a role. We present three experiments using selective adaptation that constitute strong tests of these representational hypotheses. In Experiment 1, we tested generalization of selective adaptation using different allophones of Dutch /r/ and /l/, a case where generalization has not been found with perceptual learning. In Experiments 2 and 3, we tested generalization of selective adaptation using German back fricatives in which allophonic and phonemic identity were varied orthogonally. In all three experiments, selective adaptation was observed only if adaptors and test stimuli shared allophones. Phonemic identity, in contrast, was neither necessary nor sufficient for generalization of selective adaptation to occur. These findings and other recent data using the perceptual-learning paradigm suggest that pre-lexical processing during spoken-word recognition is based on allophones, and not on context-independent phonemes.
  • Monaco, A., Fisher, S. E., & The SLI Consortium (SLIC) (2007). Multivariate linkage analysis of specific language impairment (SLI). Annals of Human Genetics, 71(5), 660-673. doi:10.1111/j.1469-1809.2007.00361.x.

    Abstract

    Specific language impairment (SLI) is defined as an inability to develop appropriate language skills without explanatory medical conditions, low intelligence or lack of opportunity. Previously, a genome scan of 98 families affected by SLI was completed by the SLI Consortium, resulting in the identification of two quantitative trait loci (QTL) on chromosomes 16q (SLI1) and 19q (SLI2). This was followed by a replication of both regions in an additional 86 families. Both these studies applied linkage methods to one phenotypic trait at a time. However, investigations have suggested that simultaneous analysis of several traits may offer more power. The current study therefore applied a multivariate variance-components approach to the SLI Consortium dataset using additional phenotypic data. A multivariate genome scan was completed and supported the importance of the SLI1 and SLI2 loci, whilst highlighting a possible novel QTL on chromosome 10. Further investigation implied that the effect of SLI1 was equally strong on non-word repetition and on reading and spelling phenotypes. In contrast, SLI2 appeared to have influences on a selection of expressive and receptive language phenotypes in addition to non-word repetition, but did not show linkage to literacy phenotypes.

    Additional information

    Members_SLIC.doc
  • Monster, I., & Lev-Ari, S. (2018). The effect of social network size on hashtag adoption on Twitter. Cognitive Science, 42(8), 3149-3158. doi:10.1111/cogs.12675.

    Abstract

    Propagation of novel linguistic terms is an important aspect of language use and language change. Here, we test how social network size influences people's likelihood of adopting novel labels by examining hashtag use on Twitter. Specifically, we test whether following fewer Twitter users leads to more varied and malleable hashtag use on Twitter, because each followed user is ascribed greater weight and thus exerts greater influence on the following user. Focusing on Dutch users tweeting about the terrorist attack in Brussels in 2016, we show that people who follow fewer other users use a larger number of unique hashtags to refer to the event, reflecting greater malleability and variability in use. These results have implications for theories of language learning, language use, and language change.
  • Morgan, A. T., van Haaften, L., van Hulst, K., Edley, C., Mei, C., Tan, T. Y., Amor, D., Fisher, S. E., & Koolen, D. A. (2018). Early speech development in Koolen de Vries syndrome limited by oral praxis and hypotonia. European Journal of Human Genetics, 26, 75-84. doi:10.1038/s41431-017-0035-9.

    Abstract

    Communication disorder is common in Koolen de Vries syndrome (KdVS), yet its specific symptomatology has not been examined, limiting prognostic counselling and application of targeted therapies. Here we examine the communication phenotype associated with KdVS. Twenty-nine participants (12 males, 4 with KANSL1 variants, 25 with 17q21.31 microdeletion), aged 1.0-27.0 years, were assessed for oral-motor, speech, language, literacy, and social functioning. Early history included hypotonia and feeding difficulties. Speech and language development was delayed and atypical from onset of first words (2;5-3;5 years of age on average). Speech was characterised by apraxia (100%) and dysarthria (93%), with stuttering in some (17%). Speech therapy and multi-modal communication (e.g., sign language) were critical in preschool. Receptive and expressive language abilities were typically commensurate (79%), both being severely affected relative to peers. Children were sociable with a desire to communicate, although some (36%) had pragmatic impairments in domains where higher-level language was required. A common phenotype was identified, including an overriding 'double hit' of oral hypotonia and apraxia in infancy and preschool, associated with severely delayed speech development. Remarkably however, speech prognosis was positive; apraxia resolved, and although dysarthria persisted, children were intelligible by mid-to-late childhood. In contrast, language and literacy deficits persisted, and pragmatic deficits were apparent. Children with KdVS require early, intensive, speech motor and language therapy, with targeted literacy and social language interventions as developmentally appropriate. Greater understanding of the linguistic phenotype may help unravel the relevance of KANSL1 to child speech and language development.

    Additional information

    41431_2017_35_MOESM1_ESM.docx
  • Moscoso del Prado Martín, F., Kostic, A., & Baayen, R. H. (2004). Putting the bits together: An information theoretical perspective on morphological processing. Cognition, 94(1), 1-18. doi:10.1016/j.cognition.2003.10.015.

    Abstract

    In this study we introduce an information-theoretical formulation of the emergence of type- and token-based effects in morphological processing. We describe a probabilistic measure of the informational complexity of a word, its information residual, which encompasses the combined influences of the amount of information contained by the target word and the amount of information carried by its nested morphological paradigms. By means of re-analyses of previously published data on Dutch words we show that the information residual outperforms the combination of traditional token- and type-based counts in predicting response latencies in visual lexical decision, and at the same time provides a parsimonious account of inflectional, derivational, and compounding processes.
  • Moscoso del Prado Martín, F., Ernestus, M., & Baayen, R. H. (2004). Do type and token effects reflect different mechanisms? Connectionist modeling of Dutch past-tense formation and final devoicing. Brain and Language, 90(1-3), 287-298. doi:10.1016/j.bandl.2003.12.002.

    Abstract

    In this paper, we show that both token and type-based effects in lexical processing can result from a single, token-based, system, and therefore, do not necessarily reflect different levels of processing. We report three Simple Recurrent Networks modeling Dutch past-tense formation. These networks show token-based frequency effects and type-based analogical effects closely matching the behavior of human participants when producing past-tense forms for both existing verbs and pseudo-verbs. The third network covers the full vocabulary of Dutch, without imposing predefined linguistic structure on the input or output words.
  • Moscoso del Prado Martín, F., Bertram, R., Haikio, T., Schreuder, R., & Baayen, R. H. (2004). Morphological family size in a morphologically rich language: The case of Finnish compared to Dutch and Hebrew. Journal of Experimental Psychology: Learning, Memory and Cognition, 30(6), 1271-1278. doi:10.1037/0278-7393.30.6.1271.

    Abstract

    Finnish has a very productive morphology in which a stem can give rise to several thousand words. This study presents a visual lexical decision experiment addressing the processing consequences of the huge productivity of Finnish morphology. The authors observed that in Finnish, words with larger morphological families elicited shorter response latencies. However, in contrast to Dutch and Hebrew, it is not the complete morphological family of a complex Finnish word that codetermines response latencies but only the subset of words directly derived from the complex word itself. Comparisons with parallel experiments using translation equivalents in Dutch and Hebrew showed substantial cross-language predictivity of family size between Finnish and Dutch but not between Finnish and Hebrew, reflecting the different ways in which the Hebrew and Finnish morphological systems contribute to the semantic organization of concepts in the mental lexicon.
  • Mostert, P., Albers, A. M., Brinkman, L., Todorova, L., Kok, P., & De Lange, F. P. (2018). Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro, 5(4): ENEURO.0401-17.2018. doi:10.1523/ENEURO.0401-17.2018.

    Abstract

    A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and the dynamic profile thereof. A field in which these techniques have led to novel insights in particular is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position, however, revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
  • Mulder, K., Van Heuven, W. J., & Dijkstra, T. (2018). Revisiting the neighborhood: How L2 proficiency and neighborhood manipulation affect bilingual processing. Frontiers in Psychology, 9: 1860. doi:10.3389/fpsyg.2018.01860.

    Abstract

    We conducted three neighborhood experiments with Dutch-English bilinguals to test effects of L2 proficiency and neighborhood characteristics within and between languages. In the past 20 years, the English (L2) proficiency of this population has considerably increased. To consider the impact of this development on neighborhood effects, we conducted a strict replication of the English lexical decision task by van Heuven, Dijkstra, & Grainger (1998, Exp. 4). In line with our prediction, English characteristics (neighborhood size, word and bigram frequency) dominated the word and nonword responses, while the nonwords also revealed an interaction of English and Dutch neighborhood size.
    The prominence of English was tested again in two experiments introducing a stronger neighborhood manipulation. In English lexical decision and progressive demasking, English items with no orthographic neighbors at all were contrasted with items having neighbors in English or Dutch (‘hermits’) only, or in both languages. In both tasks, target processing was affected strongly by the presence of English neighbors, but only weakly by Dutch neighbors. Effects are interpreted in terms of two underlying processing mechanisms: language-specific global lexical activation and lexical competition.
  • Mulhern, M. S., Stumpel, C., Stong, N., Brunner, H. G., Bier, L., Lippa, N., Riviello, J., Rouhl, R. P. W., Kempers, M., Pfundt, R., Stegmann, A. P. A., Kukolich, M. K., Telegrafi, A., Lehman, A., Lopez-Rangel, E., Houcinat, N., Barth, M., Den Hollander, N., Hoffer, M. J. V., Weckhuysen, S., Roovers, J., Djemie, T., Barca, D., Ceulemans, B., Craiu, D., Lemke, J. R., Korff, C., Mefford, H. C., Meyers, C. T., Siegler, Z., Hiatt, S. M., Cooper, G. M., Bebin, E. M., Snijders Blok, L., Veenstra-Knol, H. E., Baugh, E. H., Brilstra, E. H., Volker-Touw, C. M. L., Van Binsbergen, E., Revah-Politi, A., Pereira, E., McBrian, D., Pacault, M., Isidor, B., Le Caignec, C., Gilbert-Dussardier, B., Bilan, F., Heinzen, E. L., Goldstein, D. B., Stevens, S. J. C., & Sands, T. T. (2018). NBEA: Developmental disease gene with early generalized epilepsy phenotypes. Annals of Neurology, 84(5), 788-795. doi:10.1002/ana.25350.

    Abstract

    NBEA is a candidate gene for autism, and de novo variants have been reported in neurodevelopmental disease (NDD) cohorts. However, NBEA has not been rigorously evaluated as a disease gene, and associated phenotypes have not been delineated. We identified 24 de novo NBEA variants in patients with NDD, establishing NBEA as an NDD gene. Most patients had epilepsy with onset in the first few years of life, often characterized by generalized seizure types, including myoclonic and atonic seizures. Our data show a broader phenotypic spectrum than previously described, including a myoclonic-astatic epilepsy–like phenotype in a subset of patients.

  • Murty, L., Otake, T., & Cutler, A. (2007). Perceptual tests of rhythmic similarity: I. Mora Rhythm. Language and Speech, 50(1), 77-99. doi:10.1177/00238309070500010401.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. The rhythmic similarity hypothesis holds that where two languages have similar rhythm, listeners of each language should segment their own and the other language similarly. Such similarity in listening was previously observed only for related languages (English-Dutch; French-Spanish). We now report three experiments in which speakers of Telugu, a Dravidian language unrelated to Japanese but similar to it in crucial aspects of rhythmic structure, heard speech in Japanese and in their own language, and Japanese listeners heard Telugu. For the Telugu listeners, detection of target sequences in Japanese speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. The same results appeared when Japanese listeners heard Telugu speech containing only codas permissible in Japanese. Telugu listeners' results with Telugu speech were mixed, but the overall pattern revealed correspondences between the response patterns of the two listener groups, as predicted by the rhythmic similarity hypothesis. Telugu and Japanese listeners appear to command similar procedures for speech segmentation, further bolstering the proposal that aspects of language phonological structure affect listeners' speech segmentation.
  • Narasimhan, B., Sproat, R., & Kiraz, G. (2004). Schwa-deletion in Hindi text-to-speech synthesis. International Journal of Speech Technology, 7(4), 319-333. doi:10.1023/B:IJST.0000037075.71599.62.

    Abstract

    We describe the phenomenon of schwa-deletion in Hindi and how it is handled in the pronunciation component of a multilingual concatenative text-to-speech system. Each of the consonants in written Hindi is associated with an “inherent” schwa vowel which is not represented in the orthography. For instance, the Hindi word pronounced as [namak] (’salt’) is represented in the orthography using the consonantal characters for [n], [m], and [k]. Two main factors complicate the issue of schwa pronunciation in Hindi. First, not every schwa following a consonant is pronounced within the word. Second, in multimorphemic words, the presence of a morpheme boundary can block schwa deletion where it might otherwise occur. We propose a model for schwa-deletion which combines a general purpose schwa-deletion rule proposed in the linguistics literature (Ohala, 1983), with additional morphological analysis necessitated by the high frequency of compounds in our database. The system is implemented in the framework of finite-state transducer technology.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such “multiple-participant events” include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Narasimhan, B. (2007). Cutting, breaking, and tearing verbs in Hindi and Tamil. Cognitive Linguistics, 18(2), 195-205. doi:10.1515/COG.2007.008.

    Abstract

    Tamil and Hindi verbs of cutting, breaking, and tearing are shown to have a high degree of overlap in their extensions. However, there are also differences in the lexicalization patterns of these verbs in the two languages with regard to their category boundaries, and the number of verb types that are available to make finer-grained distinctions. Moreover, differences in the extensional ranges of corresponding verbs in the two languages can be motivated in terms of the properties of the instrument and the theme object.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (2007). "Two's company, more is a crowd": The linguistic encoding of multiple-participant events. Linguistics, 45(3), 383-392. doi:10.1515/LING.2007.013.

    Abstract

    This introduction to a special issue of the journal Linguistics sketches the challenges that multiple-participant events pose for linguistic and psycholinguistic theories, and summarizes the articles in the volume.
  • Newbury, D. F., Cleak, J. D., Banfield, E., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Slonims, V., Baird, G., Bolton, P., Everitt, A., Hennessy, E., Main, M., Helms, P., Kindley, A. D., Hodson, A., Watson, J., O’Hare, A., Cohen, W., Cowie, H., Steel, J., MacLean, A., Seckl, J., Bishop, D. V. M., Simkin, Z., Conti-Ramsden, G., & Pickles, A. (2004). Highly significant linkage to the SLI1 locus in an expanded sample of individuals affected by specific language impairment. American Journal of Human Genetics, 74(6), 1225-1238. doi:10.1086/421529.

    Abstract

    Specific language impairment (SLI) is defined as an unexplained failure to acquire normal language skills despite adequate intelligence and opportunity. We have reported elsewhere a full-genome scan in 98 nuclear families affected by this disorder, with the use of three quantitative traits of language ability (the expressive and receptive tests of the Clinical Evaluation of Language Fundamentals and a test of nonsense word repetition). This screen implicated two quantitative trait loci, one on chromosome 16q (SLI1) and a second on chromosome 19q (SLI2). However, a second independent genome screen performed by another group, with the use of parametric linkage analyses in extended pedigrees, found little evidence for the involvement of either of these regions in SLI. To investigate these loci further, we have collected a second sample, consisting of 86 families (367 individuals, 174 independent sib pairs), all with probands whose language skills are ⩾1.5 SD below the mean for their age. Haseman-Elston linkage analysis resulted in a maximum LOD score (MLS) of 2.84 on chromosome 16 and an MLS of 2.31 on chromosome 19, both of which represent significant linkage at the 2% level. Amalgamation of the wave 2 sample with the cohort used for the genome screen generated a total of 184 families (840 individuals, 393 independent sib pairs). Analysis of linkage within this pooled group strengthened the evidence for linkage at SLI1 and yielded a highly significant LOD score (MLS = 7.46, interval empirical P<.0004). Furthermore, linkage at the same locus was also demonstrated to three reading-related measures (basic reading [MLS = 1.49], spelling [MLS = 2.67], and reading comprehension [MLS = 1.99] subtests of the Wechsler Objectives Reading Dimensions).
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERP) study, we explored the possibility of selectively tracking referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., “the girl” in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects “deep” situation model ambiguity or “superficial” textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., “the girl” with two girls introduced in the context, but one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although referentially ambiguous nouns elicited a frontal negative shift compared to control words, the “double bound” but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Nieuwland, M. S., Politzer-Ahles, S., Heyselaar, E., Segaert, K., Darley, E., Kazanina, N., Von Grebmer Zu Wolfsthurn, S., Bartolozzi, F., Kogan, V., Ito, A., Mézière, D., Barr, D. J., Rousselet, G., Ferguson, H. J., Busch-Moreno, S., Fu, X., Tuomainen, J., Kulakova, E., Husband, E. M., Donaldson, D. I., Kohút, Z., Rueschemeyer, S.-A., & Huettig, F. (2018). Large-scale replication study reveals a limit on probabilistic prediction in language comprehension. eLife, 7: e33468. doi:10.7554/eLife.33468.

    Abstract

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication-analyses and exploratory Bayes factor analyses successfully replicated the noun-results but, crucially, not the article-results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article-effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

  • Niso, G., Gorgolewski, K. J., Bock, E., Brooks, T. L., Flandin, G., Gramfort, A., Henson, R. N., Jas, M., Litvak, V., Moreau, J. T., Oostenveld, R., Schoffelen, J.-M., Tadel, F., Wexler, J., & Baillet, S. (2018). MEG-BIDS, the brain imaging data structure extended to magnetoencephalography. Scientific Data, 5: 180110. doi:10.1038/sdata.2018.110.

    Abstract

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software packages that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
  • Noppeney, U., Jones, S. A., Rohe, T., & Ferrari, A. (2018). See what you hear – How the brain forms representations across the senses. Neuroforum, 24(4), 257-271. doi:10.1515/nf-2017-A066.

    Abstract

    Our senses are constantly bombarded with a myriad of signals. To make sense of this cacophony, the brain needs to integrate signals emanating from a common source, but segregate signals originating from different sources. Thus, multisensory perception relies critically on inferring the world’s causal structure (i.e., one common vs. multiple independent sources). Behavioural research has shown that the brain arbitrates between sensory integration and segregation consistent with the principles of Bayesian Causal Inference. At the neural level, recent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) studies have shown that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple perceptual estimates across the sensory processing hierarchies. Only at the top of the hierarchy, in anterior parietal cortices, did the brain form perceptual estimates that take into account the observer’s uncertainty about the world’s causal structure, consistent with Bayesian Causal Inference.
  • Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models". Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
  • Nüse, R. (2007). Der Gebrauch und die Bedeutungen von auf, an und unter. Zeitschrift für Germanistische Linguistik, 35, 27-51.

    Abstract

    Present approaches to the semantics of the German prepositions auf, an, and unter draw on two propositions: first, that spatial prepositions in general specify a region in the surroundings of the relatum object; second, that in the case of auf, an, and unter, these regions are to be defined with concepts like the vertical and/or the topological surface (the whole surrounding exterior of an object). The present paper argues that the first proposition is right and that the second is wrong. That is, while it is true that prepositions specify regions, the regions specified by auf, an, and unter should rather be defined in terms of everyday concepts like SURFACE, SIDE and UNDERSIDE. This idea is suggested by the fact that auf, an, and unter refer to different regions in different kinds of relatum objects, and that these regions are the same as the regions called surfaces, sides and undersides. Furthermore, reading and usage preferences of auf, an, and unter can be explained by a corresponding salience of the surfaces, sides and undersides of the relatum objects in question. All in all, therefore, a close look at the use of auf, an, and unter with different classes of relatum objects reveals problems for a semantic approach that draws on concepts like the vertical, while it suggests meanings of these prepositions that refer to the surface, side and underside of an object.
  • O'Connor, L. (2007). 'Chop, shred, snap apart': Verbs of cutting and breaking in Lowland Chontal. Cognitive Linguistics, 18(2), 219-230. doi:10.1515/COG.2007.010.

    Abstract

    Typological descriptions of understudied languages reveal intriguing crosslinguistic variation in descriptions of events of object separation and destruction. In Lowland Chontal of Oaxaca, verbs of cutting and breaking lexicalize event perspectives that range from the common to the quite unusual, from the tearing of cloth to the snapping apart on the cross-grain of yarn. This paper describes the semantic and syntactic criteria that characterize three verb classes in this semantic domain, examines patterns of event construal, and takes a look at likely changes in these event descriptions from the perspective of endangered language recovery.
  • O'Connor, L. (2007). [Review of the book Pronouns by D.N.S. Bhat]. Journal of Pragmatics, 39(3), 612-616. doi:10.1016/j.pragma.2006.09.007.
  • Ogdie, M. N., Fisher, S. E., Yang, M., Ishii, J., Francks, C., Loo, S. K., Cantor, R. M., McCracken, J. T., McGough, J. J., Smalley, S. L., & Nelson, S. F. (2004). Attention Deficit Hyperactivity Disorder: Fine mapping supports linkage to 5p13, 6q12, 16p13, and 17p11. American Journal of Human Genetics, 75(4), 661-668. doi:10.1086/424387.

    Abstract

    We completed fine mapping of nine positional candidate regions for attention-deficit/hyperactivity disorder (ADHD) in an extended population sample of 308 affected sibling pairs (ASPs), constituting the largest linkage sample of families with ADHD published to date. The candidate chromosomal regions were selected from all three published genomewide scans for ADHD, and fine mapping was done to comprehensively validate these positional candidate regions in our sample. Multipoint maximum LOD score (MLS) analysis yielded significant evidence of linkage on 6q12 (MLS 3.30; empiric P=.024) and 17p11 (MLS 3.63; empiric P=.015), as well as suggestive evidence on 5p13 (MLS 2.55; empiric P=.091). In conjunction with the previously reported significant linkage on the basis of fine mapping 16p13 in the same sample as this report, the analyses presented here indicate that four chromosomal regions—5p13, 6q12, 16p13, and 17p11—are likely to harbor susceptibility genes for ADHD. The refinement of linkage within each of these regions lays the foundation for subsequent investigations using association methods to detect risk genes of moderate effect size.
  • Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.

    Abstract

    Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influences concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade the effect kept increasing in the semantically related condition, and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can constrain further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more ‘loosely’, on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing by virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Converging evidence from electrocorticography and BOLD fMRI for a sharp functional boundary in superior temporal gyrus related to multisensory speech processing. Frontiers in Human Neuroscience, 12: 141. doi:10.3389/fnhum.2018.00141.

    Abstract

    Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Frontal cortex selects representations of the talker’s mouth to aid in speech perception. eLife, 7: e30387. doi:10.7554/eLife.30387.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Palva, J. M., Wang, S. H., Palva, S., Zhigalov, A., Monto, S., Brookes, M. J., & Schoffelen, J.-M. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage, 173, 632-643. doi:10.1016/j.neuroimage.2018.02.032.

    Abstract

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or “ghost” interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
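A toy illustration of the zero-lag insensitivity that imaginary coherence exploits (this is our own sketch on synthetic data, not the authors' simulation code; all signal names are invented):

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
fs, n = 250, 10000
s = rng.standard_normal(n)                 # one underlying cortical source
x = s + 0.5 * rng.standard_normal(n)       # sensor 1: zero-lag mixture of the source
y = s + 0.5 * rng.standard_normal(n)       # sensor 2: same source, same instant

f, sxy = csd(x, y, fs=fs, nperseg=512)     # cross-spectral density
_, sxx = welch(x, fs=fs, nperseg=512)
_, syy = welch(y, fs=fs, nperseg=512)
coherency = sxy / np.sqrt(sxx * syy)       # complex-valued coherency

# Ordinary coherence is inflated by the instantaneous mixing ...
print(np.abs(coherency).mean())
# ... while the imaginary part stays near zero, because zero-lag
# mixing yields a purely real cross-spectrum in expectation.
print(np.abs(coherency.imag).mean())
```

The paper's point is that in the vicinity of *true* (lagged) interactions even this imaginary part can pick up spurious "ghost" connections, so the measure's zero-lag immunity should not be mistaken for immunity to field spread altogether.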
  • Pascucci, D., Hervais-Adelman, A., & Plomp, G. (2018). Gating by induced A-Gamma asynchrony in selective attention. Human Brain Mapping, 39(10), 3854-3870. doi:10.1002/hbm.24216.

    Abstract

    Visual selective attention operates through top–down mechanisms of signal enhancement and suppression, mediated by α-band oscillations. The effects of such top–down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity changes in V1 that orchestrates selective attention, using Granger-causality and phase-amplitude coupling (PAC) analysis of EEG source signals. The task required participants to either attend to or ignore oriented gratings. Results from time-varying, directed connectivity analysis revealed frequency-specific effects of attentional selection: bottom–up γ-band influences from visual areas increased rapidly in response to attended stimuli while distributed top–down α-band influences originated from parietal cortex in response to ignored stimuli. Importantly, the results revealed a critical interplay between top–down parietal signals and α–γ PAC in visual areas. Parietal α-band influences disrupted the α–γ coupling in visual cortex, which in turn reduced the amount of γ-band outflow from visual areas. Our results are a first demonstration of how directed interactions affect cross-frequency coupling in downstream areas depending on task demands. These findings suggest that parietal cortex realizes selective attention by disrupting cross-frequency coupling at target regions, which prevents them from propagating task-irrelevant information.
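Phase-amplitude coupling of the kind analyzed in this study is commonly quantified with the mean-vector-length method; a minimal sketch on synthetic data (illustrative only, not the authors' pipeline; filter bands and frequencies are chosen arbitrarily):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic signal: gamma (60 Hz) amplitude rides on the alpha (10 Hz) phase.
alpha = np.cos(2 * np.pi * 10 * t)
gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t)
sig = alpha + gamma + 0.1 * rng.standard_normal(t.size)

def band(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(band(sig, 8, 12)))    # instantaneous alpha phase
amp = np.abs(hilbert(band(sig, 50, 70)))       # instantaneous gamma amplitude

pac = np.abs(np.mean(amp * np.exp(1j * phase)))           # mean vector length
pac_null = np.abs(np.mean(rng.permutation(amp) * np.exp(1j * phase)))
print(pac, pac_null)   # coupling is large relative to the shuffled control
```

Shuffling the amplitude series destroys the phase-amplitude relationship, which is why the permuted value serves as a null baseline.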
  • Peeters, D. (2018). A standardized set of 3D-objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047-1054. doi:10.3758/s13428-017-0925-3.

    Abstract

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theory in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3D-objects for virtual reality research is important, as reaching valid theoretical conclusions critically hinges on the use of well controlled experimental stimuli. Sharing standardized 3D-objects across different virtual reality labs will allow for science to move forward more quickly.
  • Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035-1061. doi:10.1017/S1366728917000396.

    Abstract

    Bilinguals often switch languages as a function of the language background of their addressee. The control mechanisms supporting bilinguals' ability to select the contextually appropriate language are heavily debated. Here we present four experiments in which unbalanced bilinguals named pictures in their first language Dutch and their second language English in mixed and blocked contexts. Immersive virtual reality technology was used to increase the ecological validity of the cued language-switching paradigm. Behaviorally, we consistently observed symmetrical switch costs, reversed language dominance, and asymmetrical mixing costs. These findings indicate that unbalanced bilinguals apply sustained inhibition to their dominant L1 in mixed language settings. Consequent enhanced processing costs for the L1 in a mixed versus a blocked context were reflected by a sustained positive component in event-related potentials. Methodologically, the use of virtual reality opens up a wide range of possibilities to study language and communication in bilingual and other communicative settings.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perlman, M., Little, H., Thompson, B., & Thompson, R. L. (2018). Iconicity in signed and spoken vocabulary: A comparison between American Sign Language, British Sign Language, English, and Spanish. Frontiers in Psychology, 9: 1433. doi:10.3389/fpsyg.2018.01433.

    Abstract

    Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages – American Sign Language and British Sign Language, and two spoken languages – English and Spanish. We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, perceptual strength of vision, audition, touch, smell and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words and adverbs); and (4) between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings more iconic in signs, not words; more auditory meanings more iconic in words, not signs; more tactile meanings more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs more iconic in ASL, BSL, and English, but not Spanish; manual actions especially iconic in ASL and BSL; adjectives more iconic in English and Spanish; color words especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Perry, L. K., Perlman, M., Winter, B., Massaro, D. W., & Lupyan, G. (2018). Iconicity in the speech of children and adults. Developmental Science, 21: e12572. doi:10.1111/desc.12572.

    Abstract

    Iconicity – the correspondence between form and meaning – may help young children learn to use new words. Early-learned words are higher in iconicity than later learned words. However, it remains unclear what role iconicity may play in actual language use. Here, we ask whether iconicity relates not just to the age at which words are acquired, but also to how frequently children and adults use the words in their speech. If iconicity serves to bootstrap word learning, then we would expect that children should say highly iconic words more frequently than less iconic words, especially early in development. We would also expect adults to use iconic words more often when speaking to children than to other adults. We examined the relationship between frequency and iconicity for approximately 2000 English words. Replicating previous findings, we found that more iconic words are learned earlier. Moreover, we found that more iconic words tend to be used more by younger children, and adults use more iconic words when speaking to children than to other adults. Together, our results show that young children not only learn words rated high in iconicity earlier than words low in iconicity, but they also produce these words more frequently in conversation – a pattern that is reciprocated by adults when speaking with children. Thus, the earliest conversations of children are relatively higher in iconicity, suggesting that this iconicity scaffolds the production and comprehension of spoken language during early development.
  • Petersson, K. M., Forkstam, C., & Ingvar, M. (2004). Artificial syntactic violations activate Broca’s region. Cognitive Science, 28(3), 383-407. doi:10.1207/s15516709cog2803_4.

    Abstract

    In the present study, using event-related functional magnetic resonance imaging, we investigated a group of participants on a grammaticality classification task after they had been exposed to well-formed consonant strings generated from an artificial regular grammar. We used an implicit acquisition paradigm in which the participants were exposed to positive examples. The objective of this study was to investigate whether brain regions related to language processing overlap with the brain regions activated by the grammaticality classification task used in the present study. Recent meta-analyses of functional neuroimaging studies indicate that syntactic processing is related to the left inferior frontal gyrus (Brodmann's areas 44 and 45) or Broca's region. In the present study, we observed that artificial grammaticality violations activated Broca's region in all participants. This observation lends some support to the suggestions that artificial grammar learning represents a model for investigating aspects of language learning in infants.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Petersson, K. M. (2004). The human brain, language, and implicit learning. Impuls, Tidsskrift for psykologi (Norwegian Journal of Psychology), 58(3), 62-72.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Brainstem involvement in the initial response to pain. NeuroImage, 22, 995-1005. doi:10.1016/j.neuroimage.2004.01.046.

    Abstract

    The autonomic responses to acute pain exposure usually habituate rapidly while the subjective ratings of pain remain high for more extended periods of time. Thus, systems involved in the autonomic response to painful stimulation, for example the hypothalamus and the brainstem, would be expected to attenuate the response to pain during prolonged stimulation. This suggestion is in line with the hypothesis that the brainstem is specifically involved in the initial response to pain. To probe this hypothesis, we performed a positron emission tomography (PET) study where we scanned subjects during the first and second minute of a prolonged tonic painful cold stimulation (cold pressor test) and nonpainful cold stimulation. Galvanic skin response (GSR) was recorded during the PET scanning as an index of autonomic sympathetic response. In the main effect of pain, we observed increased activity in the thalamus bilaterally, in the contralateral insula and in the contralateral anterior cingulate cortex but no significant increases in activity in the primary or secondary somatosensory cortex. The autonomic response (GSR) decreased with stimulus duration. Concomitant with the autonomic response, increased activity was observed in brainstem and hypothalamus areas during the initial vs. the late stimulation. This effect was significantly stronger for the painful than for the cold stimulation. Activity in the brainstem showed pain-specific covariation with areas involved in pain processing, indicating an interaction between the brainstem and cortical pain networks. The findings indicate that areas in the brainstem are involved in the initial response to noxious stimulation, which is also characterized by an increased sympathetic response.
  • Petrovic, P., Carlsson, K., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Context-dependent deactivation of the amygdala during pain. Journal of Cognitive Neuroscience, 16, 1289-1301.

    Abstract

    The amygdala has been implicated in fundamental functions for the survival of the organism, such as fear and pain. In accord with this, several studies have shown increased amygdala activity during fear conditioning and the processing of fear-relevant material in human subjects. In contrast, functional neuroimaging studies of pain have shown a decreased amygdala activity. It has previously been proposed that the observed deactivations of the amygdala in these studies indicate a cognitive strategy to adapt to a distressful but in the experimental setting unavoidable painful event. In this positron emission tomography study, we show that a simple contextual manipulation, immediately preceding a painful stimulation, that increases the anticipated duration of the painful event leads to a decrease in amygdala activity and modulates the autonomic response during the noxious stimulation. On a behavioral level, 7 of the 10 subjects reported that they used coping strategies more intensely in this context. We suggest that the altered activity in the amygdala may be part of a mechanism to attenuate pain-related stress responses in a context that is perceived as being more aversive. The study also showed an increased activity in the rostral part of anterior cingulate cortex in the same context in which the amygdala activity decreased, further supporting the idea that this part of the cingulate cortex is involved in the modulation of emotional and pain networks.
  • Piai, V., Rommers, J., & Knight, R. T. (2018). Lesion evidence for a critical role of left posterior but not frontal areas in alpha–beta power decreases during context-driven word production. European Journal of Neuroscience, 48(7), 2622-2629. doi:10.1111/ejn.13695.

    Abstract

    Different frequency bands in the electroencephalogram are postulated to support distinct language functions. Studies have suggested that alpha–beta power decreases may index word-retrieval processes. In context-driven word retrieval, participants hear lead-in sentences that either constrain the final word (‘He locked the door with the’) or not (‘She walked in here with the’). The last word is shown as a picture to be named. Previous studies have consistently found alpha–beta power decreases prior to picture onset for constrained relative to unconstrained sentences, localised to the left lateral-temporal and lateral-frontal lobes. However, the relative contribution of temporal versus frontal areas to alpha–beta power decreases is unknown. We recorded the electroencephalogram from patients with stroke lesions encompassing the left lateral-temporal and inferior-parietal regions or left-lateral frontal lobe and from matched controls. Individual participant analyses revealed a behavioural sentence context facilitation effect in all participants, except for in the two patients with extensive lesions to temporal and inferior parietal lobes. We replicated the alpha–beta power decreases prior to picture onset in all participants, except for in the two same patients with extensive posterior lesions. Thus, whereas posterior lesions eliminated the behavioural and oscillatory context effect, frontal lesions did not. Hierarchical clustering analyses of all patients’ lesion profiles, and behavioural and electrophysiological effects identified those two patients as having a unique combination of lesion distribution and context effects. These results indicate a critical role for the left lateral-temporal and inferior parietal lobes, but not frontal cortex, in generating the alpha–beta power decreases underlying context-driven word production.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pika, S., Wilkinson, R., Kendrick, K. H., & Vernes, S. C. (2018). Taking turns: Bridging the gap between human and animal communication. Proceedings of the Royal Society B: Biological Sciences, 285(1880): 20180598. doi:10.1098/rspb.2018.0598.

    Abstract

    Language, humans’ most distinctive trait, still remains a ‘mystery’ for evolutionary theory. It is underpinned by a universal infrastructure—cooperative turn-taking—which has been suggested as an ancient mechanism bridging the existing gap between the articulate human species and their inarticulate primate cousins. However, we know remarkably little about turn-taking systems of non-human animals, and methodological confounds have often prevented meaningful cross-species comparisons. Thus, the extent to which cooperative turn-taking is uniquely human or represents a homologous and/or analogous trait is currently unknown. The present paper draws attention to this promising research avenue by providing an overview of the state of the art of turn-taking in four animal taxa—birds, mammals, insects and anurans. It concludes with a new comparative framework to spur more research into this research domain and to test which elements of the human turn-taking system are shared across species and taxa.
  • Poletiek, F. H., Conway, C. M., Ellefson, M. R., Lai, J., Bocanegra, B. R., & Christiansen, M. H. (2018). Under what conditions can recursion be learned? Effects of starting small in artificial grammar learning of recursive structure. Cognitive Science, 42(8), 2855-2889. doi:10.1111/cogs.12685.

    Abstract

    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right‐branching and center‐embedding, with recursive embedded clauses in fixed positions and fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiment 3 and 4, we used a more complex center‐embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge of simple units to more complex units when the training input “grew” according to structural complexity, compared to when it “grew” according to string length. Overall, the results suggest that starting small confers an advantage for learning complex center‐embedded structures when the input is organized according to structural complexity.
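The two grammar types contrasted in these experiments can be illustrated with a toy generator (a hypothetical sketch for the A^nB^n pattern, not the authors' actual stimulus grammar):

```python
def center_embedded(depth, pair=("a", "b")):
    # A^n B^n: each new clause is embedded in the *middle* of the string,
    # so dependencies between a's and b's are nested.
    if depth == 0:
        return []
    a, b = pair
    return [a] + center_embedded(depth - 1, pair) + [b]

def right_branching(depth, pair=("a", "b")):
    # (AB)^n: each new clause is appended at the right edge,
    # so each dependency is closed before the next one opens.
    a, b = pair
    return [a, b] * depth

print("".join(center_embedded(3)))   # aaabbb
print("".join(right_branching(3)))   # ababab
```

"Starting small" in this sense means ordering training strings by `depth` (structural complexity) rather than by raw string length.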
  • Popov, T., Jensen, O., & Schoffelen, J.-M. (2018). Dorsal and ventral cortices are coupled by cross-frequency interactions during working memory. NeuroImage, 178, 277-286. doi:10.1016/j.neuroimage.2018.05.054.

    Abstract

    Oscillatory activity in the alpha and gamma bands is considered key in shaping functional brain architecture. Power increases in the high-frequency gamma band are typically reported in parallel to decreases in the low-frequency alpha band. However, their functional significance and in particular their interactions are not well understood. The present study shows that, in the context of an N-back working memory task, alpha power decreases in the dorsal visual stream are related to gamma power increases in early visual areas. Granger causality analysis revealed directed interregional interactions from dorsal to ventral stream areas, in accordance with task demands. Present results reveal a robust, behaviorally relevant, and architectonically decisive power-to-power relationship between alpha and gamma activity. This relationship suggests that anatomically distant power fluctuations in oscillatory activity can link cerebral network dynamics on a trial-by-trial basis during cognitive operations such as working memory.
  • Popov, T., Oostenveld, R., & Schoffelen, J.-M. (2018). FieldTrip made easy: An analysis protocol for group analysis of the auditory steady state brain response in time, frequency, and space. Frontiers in Neuroscience, 12: 711. doi:10.3389/fnins.2018.00711.

    Abstract

    The auditory steady state evoked response (ASSR) is a robust and frequently utilized phenomenon in psychophysiological research. It reflects the auditory cortical response to an amplitude-modulated constant carrier frequency signal. The present report provides a concrete example of a group analysis of the EEG data from 29 healthy human participants, recorded during an ASSR paradigm, using the FieldTrip toolbox. First, we demonstrate sensor-level analysis in the time domain, allowing for a description of the event-related potentials (ERPs), as well as their statistical evaluation. Second, frequency analysis is applied to describe the spectral characteristics of the ASSR, followed by group level statistical analysis in the frequency domain. Third, we show how time- and frequency-domain analysis approaches can be combined in order to describe the temporal and spectral development of the ASSR. Finally, we demonstrate source reconstruction techniques to characterize the primary neural generators of the ASSR. Throughout, we pay special attention to explaining the design of the analysis pipeline for single subjects and for the group level analysis. The pipeline presented here can be adjusted to accommodate other experimental paradigms and may serve as a template for similar analyses.
  • Popov, V., Ostarek, M., & Tenison, C. (2018). Practices and pitfalls in inferring neural representations. NeuroImage, 174, 340-351. doi:10.1016/j.neuroimage.2018.03.041.

    Abstract

    A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from the ground truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces, and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques, and attentional modulation.