Publications

  • Verdonschot, R. G., Tokimoto, S., & Miyaoka, Y. (2019). The fundamental phonological unit of Japanese word production: An EEG study using the picture-word interference paradigm. Journal of Neurolinguistics, 51, 184-193. doi:10.1016/j.jneuroling.2019.02.004.

    Abstract

    It has been shown that in Germanic languages (e.g. English, Dutch) phonemes are the primary (or proximate) planning units during the early stages of phonological encoding. In contrast, in Chinese and Japanese the phoneme does not seem to play an important role; rather, the syllable (Chinese) and the mora (Japanese) are essential. However, despite the lack of behavioral evidence, neurocorrelational studies in Chinese suggested that electrophysiological brain responses (i.e. preceding overt responses) may indicate some significance for the phoneme. We investigated this matter in Japanese, and our data show that unlike in Chinese (for which the literature shows mixed effects), in Japanese both the behavioral and neurocorrelational data indicate an important role only for the mora (and not the phoneme) during the early stages of phonological encoding.
  • Verga, L., & Kotz, S. A. (2019). Putting language back into ecological communication contexts. Language, Cognition and Neuroscience, 34(4), 536-544. doi:10.1080/23273798.2018.1506886.

    Abstract

    Language is a multi-faceted form of communication. Only recently, though, has language research moved on from simple stimuli and protocols toward a more ecologically valid approach, namely “shifting” from words and simple sentences to stories with varying degrees of contextual complexity. While much needed, the use of ecologically valid stimuli such as stories should also be explored in interactive rather than individualistic experimental settings, leading the way to an interactive neuroscience of language. Indeed, mounting evidence suggests that cognitive processes and their underlying neural activity significantly differ between social and individual experiences. We review evidence indicating that the characteristics of linguistic and extra-linguistic contexts may significantly influence communication, including spoken language comprehension. In doing so, we provide evidence on the use of new paradigms and methodological advancements that may enable the study of complex language features in a truly interactive, ecological way.
  • Verga, L., & Kotz, S. A. (2019). Spatial attention underpins social word learning in the right fronto-parietal network. NeuroImage, 195, 165-173. doi:10.1016/j.neuroimage.2019.03.071.

    Abstract

    In a multi- and inter-cultural world, we daily encounter new words. Adult learners often rely on a situational context to learn and understand a new word's meaning. Here, we explored whether interactive learning facilitates word learning by directing the learner's attention to a correct new word referent when a situational context is non-informative. We predicted larger involvement of inferior parietal, frontal, and visual cortices involved in visuo-spatial attention during interactive learning. We scanned participants while they played a visual word learning game with and without a social partner. As hypothesized, interactive learning enhanced activity in the right Supramarginal Gyrus when the situational context provided little information. Activity in the right Inferior Frontal Gyrus during interactive learning correlated with post-scanning behavioral test scores, while these scores correlated with activity in the Fusiform Gyrus in the non-interactive group. These results indicate that attention is involved in interactive learning when the situational context is minimal and suggest that individual learning processes may be largely different from interactive ones. As such, they challenge the ecological validity of what we know about individual learning and advocate the exploration of interactive learning in naturalistic settings.
  • Verheijen, J., Wong, S. Y., Rowe, J. H., Raymond, K., Stoddard, J., Delmonte, O. M., Bosticardo, M., Dobbs, K., Niemela, J., Calzoni, E., Pai, S.-Y., Choi, U., Yamazaki, Y., Comeau, A. M., Janssen, E., Henderson, L., Hazen, M., Berry, G., Rosenzweig, S. D., Aldhekri, H. H., He, M., Notarangelo, L. D., & Morava, E. (2020). Defining a new immune deficiency syndrome: MAN2B2-CDG. Journal of Allergy and Clinical Immunology, 145(3), 1008-1011. doi:10.1016/j.jaci.2019.11.016.
  • Verheijen, J., Tahata, S., Kozicz, T., Witters, P., & Morava, E. (2020). Therapeutic approaches in Congenital Disorders of Glycosylation (CDG) involving N-linked glycosylation: An update. Genetics in Medicine, 22(2), 268-279. doi:10.1038/s41436-019-0647-2.

    Abstract

    Congenital disorders of glycosylation (CDG) are a group of clinically and genetically heterogeneous metabolic disorders. Over 150 CDG types have been described. Most CDG types are ultrarare disorders. CDG types affecting N-glycosylation are the most common type of CDG with emerging therapeutic possibilities. This review is an update on the available therapies for disorders affecting the N-linked glycosylation pathway. In the first part of the review, we highlight the clinical presentation, general principles of management, and disease-specific therapies for N-linked glycosylation CDG types, organized by organ system. The second part of the review focuses on the therapeutic strategies currently available and under development. We summarize the successful (pre-) clinical application of nutritional therapies, transplantation, activated sugars, gene therapy, and pharmacological chaperones and outline the anticipated expansion of the therapeutic possibilities in CDG. We aim to provide a comprehensive update on the treatable aspects of CDG types involving N-linked glycosylation, with particular emphasis on disease-specific treatment options for the involved organ systems; call for natural history studies; and present current and future therapeutic strategies for CDG.
  • Verhoef, E., Demontis, D., Burgess, S., Shapland, C. Y., Dale, P. S., Okbay, A., Neale, B. M., Faraone, S. V., iPSYCH-Broad-PGC ADHD Consortium, Stergiakouli, E., Davey Smith, G., Fisher, S. E., Borglum, A., & St Pourcain, B. (2019). Disentangling polygenic associations between Attention-Deficit/Hyperactivity Disorder, educational attainment, literacy and language. Translational Psychiatry, 9: 35. doi:10.1038/s41398-018-0324-2.

    Abstract

    Interpreting polygenic overlap between ADHD and both literacy-related and language-related impairments is challenging as genetic associations might be influenced by indirectly shared genetic factors. Here, we investigate genetic overlap between polygenic ADHD risk and multiple literacy-related and/or language-related abilities (LRAs), as assessed in UK children (N ≤ 5919), accounting for genetically predictable educational attainment (EA). Genome-wide summary statistics on clinical ADHD and years of schooling were obtained from large consortia (N ≤ 326,041). Our findings show that ADHD-polygenic scores (ADHD-PGS) were inversely associated with LRAs in ALSPAC, most consistently with reading-related abilities, and explained ≤1.6% phenotypic variation. These polygenic links were then dissected into both ADHD effects shared with and independent of EA, using multivariable regressions (MVR). Conditional on EA, polygenic ADHD risk remained associated with multiple reading and/or spelling abilities, phonemic awareness and verbal intelligence, but not listening comprehension and non-word repetition. Using conservative ADHD-instruments (P-threshold < 5 × 10−8), this corresponded, for example, to a 0.35 SD decrease in pooled reading performance per log-odds in ADHD-liability (P = 9.2 × 10−5). Using subthreshold ADHD-instruments (P-threshold < 0.0015), these effects became smaller, with a 0.03 SD decrease per log-odds in ADHD risk (P = 1.4 × 10−6), although the predictive accuracy increased. However, polygenic ADHD-effects shared with EA were of equal strength and at least equal magnitude compared to those independent of EA, for all LRAs studied, and detectable using subthreshold instruments. Thus, ADHD-related polygenic links with LRAs are to a large extent due to shared genetic effects with EA, although there is evidence for an ADHD-specific association profile, independent of EA, that primarily involves literacy-related impairments.

    Additional information

    41398_2018_324_MOESM1_ESM.docx
  • Vernes, S. C. (2020). Understanding bat vocal learning to gain insight into speech and language. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 6). Nijmegen: The Evolution of Language Conferences.
  • Vernes, S. C., & Wilkinson, G. S. (2020). Behaviour, biology, and evolution of vocal learning in bats. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1789): 20190061. doi:10.1098/rstb.2019.0061.

    Abstract

    The comparative approach can provide insight into the evolution of human speech, language and social communication by studying relevant traits in animal systems. Bats are emerging as a model system with great potential to shed light on these processes given their learned vocalizations, close social interactions, and mammalian brains and physiology. A recent framework outlined the multiple levels of investigation needed to understand vocal learning across a broad range of non-human species, including cetaceans, pinnipeds, elephants, birds and bats. Here, we apply this framework to the current state-of-the-art in bat research. This encompasses our understanding of the abilities bats have displayed for vocal learning, what is known about the timing and social structure needed for such learning, and current knowledge about the prevalence of the trait across the order. It also addresses the biology (vocal tract morphology, neurobiology and genetics) and evolution of this trait. We conclude by highlighting some key questions that should be answered to advance our understanding of the biological encoding and evolution of speech and spoken communication. This article is part of the theme issue 'What can animal communication teach us about human language?'

    Additional information

    earlier version of article on BioRxiv
  • Vernes, S. C. (2019). Neuromolecular approaches to the study of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 577-593). Cambridge, MA: MIT Press.
  • Versace, E., Rogge, J. R., Shelton-May, N., & Ravignani, A. (2019). Positional encoding in cotton-top tamarins (Saguinus oedipus). Animal Cognition, 22, 825-838. doi:10.1007/s10071-019-01277-y.

    Abstract

    Strategies used in artificial grammar learning can shed light into the abilities of different species to extract regularities from the environment. In the A(X)nB rule, A and B items are linked, but assigned to different positional categories and separated by distractor items. Open questions are how widespread is the ability to extract positional regularities from A(X)nB patterns, which strategies are used to encode positional regularities and whether individuals exhibit preferences for absolute or relative position encoding. We used visual arrays to investigate whether cotton-top tamarins (Saguinus oedipus) can learn this rule and which strategies they use. After training on a subset of exemplars, two of the tested monkeys successfully generalized to novel combinations. These tamarins discriminated between categories of tokens with different properties (A, B, X) and detected a positional relationship between non-adjacent items even in the presence of novel distractors. The pattern of errors revealed that successful subjects used visual similarity with training stimuli to solve the task and that successful tamarins extracted the relative position of As and Bs rather than their absolute position, similarly to what has been observed in other species. Relative position encoding appears to be favoured in different tasks and taxa. Generalization, though, was incomplete, since we observed a failure with items that during training had always been presented in reinforced arrays, showing the limitations in grasping the underlying positional rule. These results suggest the use of local strategies in the extraction of positional rules in cotton-top tamarins.

    Additional information

    Supplementary file
  • Verspeek, J., Staes, N., Van Leeuwen, E. J. C., Eens, M., & Stevens, J. M. G. (2019). Bonobo personality predicts friendship. Scientific Reports, 9: 19245. doi:10.1038/s41598-019-55884-3.

    Abstract

    In bonobos, strong bonds have been documented between unrelated females and between mothers and their adult sons, which can have important fitness benefits. Often age, sex or kinship similarity have been used to explain social bond strength variation. Recent studies in other species also stress the importance of personality, but this relationship remains to be investigated in bonobos. We used behavioral observations on 39 adult and adolescent bonobos housed in 5 European zoos to study the role of personality similarity in dyadic relationship quality. Dimension reduction analyses on individual and dyadic behavioral scores revealed multidimensional personality (Sociability, Openness, Boldness, Activity) and relationship quality components (value, compatibility). We show that, aside from relatedness and sex combination of the dyad, relationship quality is also associated with personality similarity of both partners. While similarity in Sociability resulted in higher relationship values, lower relationship compatibility was found between bonobos with similar Activity scores. The results of this study expand our understanding of the mechanisms underlying social bond formation in anthropoid apes. In addition, we suggest that future studies in closely related species like chimpanzees should implement identical methods for assessing bond strength to shed further light on the evolution of this phenomenon.

    Additional information

    Supplementary material
  • Vogels, J., Howcroft, D. M., Tourtouri, E. N., & Demberg, V. (2020). How speakers adapt object descriptions to listeners under load. Language, Cognition and Neuroscience, 35(1), 78-92. doi:10.1080/23273798.2019.1648839.

    Abstract

    A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener’s needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener’s reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener’s cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener’s needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.
  • De Vos, J., Schriefers, H., & Lemhöfer, K. (2020). Does study language (Dutch versus English) influence study success of Dutch and German students in the Netherlands? Dutch Journal of Applied Linguistics, 9, 60-78. doi:10.1075/dujal.19008.dev.

    Abstract

    We investigated whether the language of instruction (Dutch or English) influenced the study success of 614 Dutch and German first-year psychology students in the Netherlands. The Dutch students who were instructed in Dutch studied in their native language (L1), the other students in a second language (L2). In addition, only the Dutch students studied in their home country. Both these variables could potentially influence study success, operationalised as the number of European Credits (ECs) the students obtained, their grades, and drop-out rates. The L1 group outperformed the three L2 groups with respect to grades, but there were no significant differences in ECs and drop-out rates (although descriptively, the L1 group still performed best). In conclusion, this study shows an advantage of studying in the L1 when it comes to grades, and thereby contributes to the current debate in the Dutch media regarding the desirability of offering degrees taught in English.
  • De Vos, J., Schriefers, H., Bosch, L. t., & Lemhöfer, K. (2019). Interactive L2 vocabulary acquisition in a lab-based immersion setting. Language, Cognition and Neuroscience, 34(7), 916-935. doi:10.1080/23273798.2019.1599127.

    Abstract

    We investigated to what extent L2 word learning in spoken interaction takes place when learners are unaware of taking part in a language learning study. Using a novel paradigm for approximating naturalistic (but not necessarily non-intentional) L2 learning in the lab, German learners of Dutch were led to believe that the study concerned judging the price of objects. Dutch target words (object names) were selected individually such that these words were unknown to the respective participant. Then, in a dialogue-like task with the experimenter, the participants were first exposed to and then tested on the target words. In comparison to a no-input control group, we observed a clear learning effect especially from the first two exposures, and better learning for cognates than for non-cognates, but no modulating effect of the exposure-production lag. Moreover, some of the acquired knowledge persisted over a six-month period.
  • De Vos, J. (2019). Naturalistic word learning in a second language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Vosse, T., & Kempen, G. (1991). A hybrid model of human sentence processing: Parsing right-branching, center-embedded and cross-serial dependencies. In M. Tomita (Ed.), Proceedings of the Second International Workshop on Parsing Technologies.
  • Wagner, M. A., Broersma, M., McQueen, J. M., & Lemhöfer, K. (2019). Imitating speech in an unfamiliar language and an unfamiliar non-native accent in the native language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1362-1366). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This study concerns individual differences in speech imitation ability and the role that lexical representations play in imitation. We examined 1) whether imitation of sounds in an unfamiliar language (L0) is related to imitation of sounds in an unfamiliar non-native accent in the speaker’s native language (L1) and 2) whether it is easier or harder to imitate speech when you know the words to be imitated. Fifty-nine native Dutch speakers imitated words with target vowels in Basque (/a/ and /e/) and Greek-accented Dutch (/i/ and /u/). Spectral and durational analyses of the target vowels revealed no relationship between the success of L0 and L1 imitation and no difference in performance between tasks (i.e., L1 imitation was neither aided nor blocked by lexical knowledge about the correct pronunciation). The results suggest instead that the relationship of the vowels to native phonological categories plays a bigger role in imitation.
  • Warren, C. M., Tona, K. D., Ouwekerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background

    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.
    Objective

    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalograph (EEG) recordings.
    Methods

    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.
    Results

    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).
    Conclusion

    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Waymel, A., Friedrich, P., Bastian, P.-A., Forkel, S. J., & Thiebaut de Schotten, M. (2020). Anchoring the human olfactory system within a functional gradient. NeuroImage, 216: 116863. doi:10.1016/j.neuroimage.2020.116863.

    Abstract

    Margulies et al. (2016) demonstrated the existence of at least five independent functional connectivity gradients in the human brain. However, it is unclear how these functional gradients might link to anatomy. The Dual Origin theory proposes that differences in cortical cytoarchitecture originate from two trends of progressive differentiation between the different layers of the cortex, referred to as the hippocampocentric and olfactocentric systems. When conceptualising the functional connectivity gradients within the evolutionary framework of the Dual Origin theory, the first gradient likely represents the hippocampocentric system anatomically. Here we expand on this concept and demonstrate that the fifth gradient likely links to the olfactocentric system. We describe the anatomy of the latter as well as the evidence to support this hypothesis. Together, the first and fifth gradients might help to model the Dual Origin theory of the human brain and inform brain models and pathologies.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double‐object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb; learned in everyday language use and stored in memory. German participants were reading PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

    Additional information

    Supporting information
  • De Weert, C., & Levelt, W. J. M. (1976). Comparison of normal and dichoptic colour mixing. Vision Research, 16, 59-70. doi:10.1016/0042-6989(76)90077-8.

    Abstract

    Dichoptic mixtures of equiluminous components of different wavelengths were matched with a binocularly presented "monocular" mixture of appropriately chosen amounts of the same colour components. Stimuli were chosen from the region of 490-630 nm. Although satisfactory colour matches could be obtained, dichoptic mixtures differed from normal mixtures to a considerable extent. Midspectral stimuli tended to be more dominant in the dichoptic mixtures than either short or long wavelength stimuli. An attempt was made to describe the relation between monocular and dichoptic mixtures with one function containing a wavelength variable and an eye dominance parameter.
  • De Weert, C., & Levelt, W. J. M. (1976). Dichoptic brightness combinations for unequally coloured lights. Vision Research, 16, 1077-1086.
  • Weissbart, H., Kandylaki, K. D., & Reichenbach, T. (2020). Cortical tracking of surprisal during continuous speech comprehension. Journal of Cognitive Neuroscience, 32, 155-166. doi:10.1162/jocn_a_01467.

    Abstract

    Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network and through relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band as well as in the higher frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
  • Weissenborn, J., & Stralka, R. (1984). Das Verstehen von Mißverständnissen. Eine ontogenetische Studie. In Zeitschrift für Literaturwissenschaft und Linguistik (pp. 113-134). Stuttgart: Metzler.
  • Weissenborn, J. (1984). La genèse de la référence spatiale en langue maternelle et en langue seconde: similarités et différences. In G. Extra, & M. Mittner (Eds.), Studies in second language acquisition by adult immigrants (pp. 262-286). Tilburg: Tilburg University.
  • Weissenborn, J. (1988). Von der demonstratio ad oculos zur Deixis am Phantasma. Die Entwicklung der lokalen Referenz bei Kindern. In Karl Bühler's Theory of Language. Proceedings of the Conference held at Kirchberg, August 26, 1984 and Essen, November 21–24, 1984 (pp. 257-276). Amsterdam: Benjamins.
  • Whitaker, K., & Guest, O. (2020). #bropenscience is broken science: Kirstie Whitaker and Olivia Guest ask how open ‘open science’ really is. The Psychologist, 33, 34-37.
  • Willems, R. M., Nastase, S. A., & Milivojevic, B. (2020). Narratives for Neuroscience. Trends in Neurosciences, 43(5), 271-273. doi:10.1016/j.tins.2020.03.003.

    Abstract

    People organize and convey their thoughts according to narratives. However, neuroscientists are often reluctant to incorporate narrative stimuli into their experiments. We argue that narratives deserve wider adoption in human neuroscience because they tap into the brain’s native machinery for representing the world and provide rich variability for testing hypotheses.
  • Wilson, B., Spierings, M., Ravignani, A., Mueller, J. L., Mintz, T. H., Wijnen, F., Van der Kant, A., Smith, K., & Rey, A. (2020). Non‐adjacent dependency learning in humans and other animals. Topics in Cognitive Science, 12(3), 843-858. doi:10.1111/tops.12381.

    Abstract

    Learning and processing natural language requires the ability to track syntactic relationships between words and phrases in a sentence, which are often separated by intervening material. These nonadjacent dependencies can be studied using artificial grammar learning paradigms and structured sequence processing tasks. These approaches have been used to demonstrate that human adults, infants and some nonhuman animals are able to detect and learn dependencies between nonadjacent elements within a sequence. However, learning nonadjacent dependencies appears to be more cognitively demanding than detecting dependencies between adjacent elements, and only occurs in certain circumstances. In this review, we discuss different types of nonadjacent dependencies in language and in artificial grammar learning experiments, and how these differences might impact learning. We summarize different types of perceptual cues that facilitate learning, by highlighting the relationship between dependent elements bringing them closer together either physically, attentionally, or perceptually. Finally, we review artificial grammar learning experiments in human adults, infants, and nonhuman animals, and discuss how similarities and differences observed across these groups can provide insights into how language is learned across development and how these language‐related abilities might have evolved.
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype comprised of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.

  • Wittenburg, P., Lautenschlager, M., Thiemann, H., Baldauf, C., & Trilsbeek, P. (2020). FAIR Practices in Europe. Data Intelligence, 2(1-2), 257-263. doi:10.1162/dint_a_00048.

    Abstract

    Institutions driving fundamental research at the cutting edge, such as those of the Max Planck Society (MPS), have taken steps to optimize data management and stewardship in order to address new scientific questions. In this paper we selected three MPS institutes from the areas of the humanities, environmental sciences, and natural sciences as examples of the efforts made to integrate large amounts of data from collaborators worldwide, creating a data space that is ready to be exploited for new insights based on data-intensive science methods. For this integration, the typical challenges of fragmentation, poor quality, and social differences had to be overcome. In all three cases, the core pillars were well-managed repositories driven by scientific needs, together with harmonization principles that had been agreed upon in the community. It is not surprising that these principles are very much aligned with what have now become the FAIR principles. The FAIR principles confirm the correctness of earlier decisions, and their clear formulation identifies the gaps which the projects still need to address.
  • Wnuk, E., Laophairoj, R., & Majid, A. (2020). Smell terms are not rara: A semantic investigation of odor vocabulary in Thai. Linguistics, 58(4), 937-966. doi:10.1515/ling-2020-0009.
  • Woensdregt, M., & Dingemanse, M. (2020). Other-initiated repair can facilitate the emergence of compositional language. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 474-476). Nijmegen: The Evolution of Language Conferences.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QC: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains these contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword - novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read in less time than the spoken form takes to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding of the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often treated as interchangeable, overlooking the modality-specific aspects of each. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill, in terms of the amount of variance in one comprehension type explained by the other. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were administered to 85 second- and third-grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of the variance in reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and unaffected by the modality in which the information is provided. Vocabulary in particular seems to play a large role in this domain-general part. The findings warrant a more prominent focus on modality-specific aspects of both reading and listening comprehension in research and education.
  • Xiong, K., Verdonschot, R. G., & Tamaoka, K. (2020). The time course of brain activity in reading identical cognates: An ERP study of Chinese - Japanese bilinguals. Journal of Neurolinguistics, 55: 100911. doi:10.1016/j.jneuroling.2020.100911.

    Abstract

    Previous studies suggest that bilinguals' lexical access is language non-selective, especially for orthographically identical translation equivalents across languages (i.e., identical cognates). The present study investigated how such words (e.g., 学校, meaning "school" in both Chinese and Japanese) are processed in the (late) Chinese - Japanese bilingual brain. Using an L2-Japanese lexical decision task, both behavioral and electrophysiological data were collected. Reaction times (RTs), as well as the N400 component, showed that cognates are more easily recognized than non-cognates. Additionally, an early component (i.e., the N250), potentially reflecting activation at the word-form level, was also found. Cognates elicited a more positive N250 than non-cognates in the frontal region, indicating that the cognate facilitation effect occurred at an early stage of word-form processing for languages with logographic scripts.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2020). Less is Better: A cognitively inspired unsupervised model for language segmentation. In M. Zock, E. Chersoni, A. Lenci, & E. Santus (Eds.), Proceedings of the Workshop on the Cognitive Aspects of the Lexicon ( 28th International Conference on Computational Linguistics) (pp. 33-45). Stroudsburg: Association for Computational Linguistics.

    Abstract

    Language users process utterances by segmenting them into many cognitive units, which vary in their sizes and linguistic levels. Although we can do such unitization/segmentation easily, its cognitive mechanism is still not clear. This paper proposes an unsupervised model, Less-is-Better (LiB), to simulate the human cognitive process with respect to language unitization/segmentation. LiB follows the principle of least effort and aims to build a lexicon which minimizes the number of unit tokens (alleviating the effort of analysis) and number of unit types (alleviating the effort of storage) at the same time on any given corpus. LiB’s workflow is inspired by empirical cognitive phenomena. The design makes the mechanism of LiB cognitively plausible and the computational requirement light-weight. The lexicon generated by LiB performs the best among different types of lexicons (e.g. ground-truth words) both from an information-theoretical view and a cognitive view, which suggests that the LiB lexicon may be a plausible proxy of the mental lexicon.

  • Yang, W., Chan, A., Chang, F., & Kidd, E. (2020). Four-year-old Mandarin-speaking children’s online comprehension of relative clauses. Cognition, 196: 104103. doi:10.1016/j.cognition.2019.104103.

    Abstract

    A core question in language acquisition is whether children’s syntactic processing is experience-dependent and language-specific, or whether it is governed by abstract, universal syntactic machinery. We address this question by presenting corpus and online processing data from children learning Mandarin Chinese, a language that has been important in debates about the universality of parsing processes. The corpus data revealed that two different relative clause constructions in Mandarin are differentially used to modify syntactic subjects and objects. In the experiment, 4-year-old children’s eye-movements were recorded as they listened to the two RC construction types (e.g., Can you pick up the pig that pushed the sheep?). A permutation analysis showed that children’s ease of comprehension was closely aligned with the distributional frequencies, suggesting that syntactic processing preferences are shaped by input experience of these constructions.

  • Yang, J., Cai, Q., & Tian, X. (2020). How do we segment text? Two-stage chunking operation in reading. eNeuro, 7(3): ENEURO.0425-19.2020. doi:10.1523/ENEURO.0425-19.2020.

    Abstract

    Chunking in language comprehension is a process that segments continuous linguistic input into smaller chunks that are in the reader’s mental lexicon. Effective chunking during reading facilitates disambiguation and enhances efficiency for comprehension. However, the chunking mechanisms remain elusive, especially in reading, given that information arrives simultaneously yet written systems such as Chinese may lack explicit cues for labeling boundaries. What are the mechanisms of chunking that mediate the reading of text containing hierarchical information? We investigated this question by manipulating the lexical status of the chunks at distinct levels in four-character Chinese strings, including the two-character local chunk and the four-character global chunk. Male and female human participants were asked to make lexical decisions on these strings in a behavioral experiment, followed by a passive reading task during which their electroencephalography (EEG) was recorded. The behavioral results showed that the lexical decision time for lexicalized two-character local chunks was influenced by the lexical status of the four-character global chunk, but not vice versa, indicating that the processing of global chunks takes priority over that of local chunks. The EEG results revealed that familiar lexical chunks were detected simultaneously at both levels and further processed in a different temporal order: the onset of lexical access for global chunks was earlier than that for local chunks. These consistent results suggest a two-stage operation for chunking in reading: simultaneous detection of familiar lexical chunks at multiple levels around 100 ms, followed by recognition of chunks with global precedence.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2020). The influence of orthography on speech production: Evidence from masked priming in word-naming and picture-naming tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(8), 1570-1589. doi:10.1037/xlm0000829.

    Abstract

    In a masked priming word-naming task, facilitation due to initial-segmental sound overlap for 2-character kanji prime-target pairs was affected by certain orthographic properties (Yoshihara, Nakayama, Verdonschot, & Hino, 2017). That is, the facilitation due to the initial mora overlap occurred only when the mora was the whole pronunciation of the initial kanji characters (i.e., match pairs; e.g., /ka-se.ki/-/ka-rjo.ku/). When the shared initial mora was only a part of the kanji characters' readings, however, there was no facilitation (i.e., mismatch pairs; e.g., /ha.tu-a.N/-/ha.ku-bu.tu/). In the present study, we used a masked priming picture-naming task to investigate whether the previous results were relevant only when the orthography of targets is visually presented. In Experiment 1, the main findings of our word-naming task were fully replicated in a picture-naming task. In Experiments 2 and 3, the absence of facilitation for the mismatch pairs was confirmed with a new set of stimuli. On the other hand, a significant facilitation was observed for the match pairs that shared the 2 initial morae (in Experiment 4), which was again consistent with the results of our word-naming study. These results suggest that orthographic properties constrain the phonological expression of masked priming for kanji words across 2 tasks that are likely to differ in how phonology is retrieved. Specifically, we propose that the orthography of a word is activated online and constrains the phonological encoding processes in these tasks.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior; vol. 56 (pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent upon hearing an unknown word in the moment. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children, and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways. The hypothesis is then either accepted or rejected in light of additional evidence. Proponents of the Associative Learning framework, by contrast, characterize learning as aggregating information over time through implicit associative mechanisms. A learner acquires the meaning of a word when the association between the word and the referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, we aim to deepen understanding of the underlying word learning mechanisms by examining and comparing the two theoretical accounts.
  • Zhang, Y., Amatuni, A., Crain, E., & Yu, C. (2020). Seeking meaning: Examining a cross-situational solution to learn action verbs using human simulation paradigm. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 2854-2860). Montreal, QC: Cognitive Science Society.

    Abstract

    To acquire the meaning of a verb, language learners not only need to find the correct mapping between a specific verb and an action or event in the world, but also infer the underlying relational meaning that the verb encodes. Most verb naming instances in naturalistic contexts are highly ambiguous as many possible actions can be embedded in the same scenario and many possible verbs can be used to describe those actions. To understand whether learners can find the correct verb meaning from referentially ambiguous learning situations, we conducted three experiments using the Human Simulation Paradigm with adult learners. Our results suggest that although finding the right verb meaning from one learning instance is hard, there is a statistical solution to this problem. When provided with multiple verb learning instances all referring to the same verb, learners are able to aggregate information across situations and gradually converge to the correct semantic space. Even in cases where they may not guess the exact target verb, they can still discover the right meaning by guessing a similar verb that is semantically close to the ground truth.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2020). Language selection contributes to intrusion errors in speaking: Evidence from picture naming. Bilingualism: Language and Cognition, 23, 788-800. doi:10.1017/S1366728919000683.

    Abstract

    Bilinguals usually select the right language to speak for the particular context they are in, but sometimes the nontarget language intrudes. Despite a large body of research into language selection and language control, it remains unclear where intrusion errors originate from. These errors may be due to incorrect selection of the nontarget language at the conceptual level, or be a consequence of erroneous word selection (despite correct language selection) at the lexical level. We examined the former possibility in two language switching experiments using a manipulation that supposedly affects language selection on the conceptual level, namely whether the conversational language context was associated with the target language (congruent) or with the alternative language (incongruent) on a trial. Both experiments showed that language intrusion errors occurred more often in incongruent than in congruent contexts, providing converging evidence that language selection during concept preparation is one driving force behind language intrusion.
  • Zheng, X. (2020). Control and monitoring in bilingual speech production: Language selection, switching and intrusion. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Zheng, X., Roelofs, A., Erkan, H., & Lemhöfer, K. (2020). Dynamics of inhibitory control during bilingual speech production: An electrophysiological study. Neuropsychologia, 140: 107387. doi:10.1016/j.neuropsychologia.2020.107387.

    Abstract

    Bilingual speakers have to control their languages to avoid interference, which may be achieved by enhancing the target language and/or inhibiting the nontarget language. Previous research suggests that bilinguals use inhibition (e.g., Jackson et al., 2001), which should be reflected in the N2 component of the event-related potential (ERP) in the EEG. In the current study, we investigated the dynamics of inhibitory control by measuring the N2 during language switching and repetition in bilingual picture naming. Participants had to name pictures in Dutch or English depending on the cue. A run of same-language trials could be short (two or three trials) or long (five or six trials). We assessed whether RTs and the N2 changed over the course of same-language runs, and at a switch between languages. Results showed that speakers named pictures more quickly late in a run of same-language trials than early in the run. Moreover, they made a language switch more quickly after a long run than after a short run. This run-length effect was only present in the first language (L1), not in the second language (L2). In the ERPs, we observed a widely distributed switch effect in the N2, which was larger after a short run than after a long run. This effect was only present in the L2, not in the L1, although the difference between languages was not significant. In contrast, the N2 was not modulated during a same-language run. Our results suggest that the nontarget language is inhibited at a switch, but not during the repeated use of the target language.

  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulties than native speakers to discriminate plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point at a more prevalent, but not exclusive occurrence of shallow syntactic processing in L2 learners.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of the N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level semantic unification load, respectively. By employing integrated EEG-fMRI analyses, this study is among the first to shed light on how to integrate trial-level variation in the study of language comprehension.
  • Zimianiti, E. (2020). Verb production and comprehension in dementia: A verb argument structure approach. Master Thesis, Aristotle University of Thessaloniki, Thessaloniki, Greece.

    Abstract

    The purpose of this study is to shed light on the linguistic deficit in populations with dementia, and more specifically with Mild Cognitive Impairment and Alzheimer’s Disease, by examining the assignment of thematic roles (θ-roles) in sentences including psychological verbs.
    The interest in types of dementia and its precursor is due to the relevance of the disease in present-day world society (Caloi, 2017). The World Alzheimer Report 2016 (Prince et al., 2016) estimated that 47 million people worldwide live with a type of dementia. This number surpasses the population of Spain, an entire country, and is expected, according to the report, to triple by 2050, reaching 131 million. The impact of the disease is felt not only at the social level but also at the economic one, because of patients’ need for assistance in their everyday life. Worryingly, there is no cure once the disease has started. Despite the efforts of medicine, dementia remains difficult to diagnose, because a variety of cognitive abilities must be assessed in combination with a medical workup. Language is a crucial component of the diagnostic procedure, as linguistic deficits are among the first symptoms that accompany the onset of the disease. Therefore, further investigation of linguistic impairment is necessary in order to improve the diagnostic techniques used today. Furthermore, the lack of effective drugs for treating the disease has necessitated the development of training programs for maintaining and increasing the cognitive abilities of people with either Mild Cognitive Impairment or a type of dementia …
  • Zinken, J., Rossi, G., & Reddy, V. (2020). Doing more than expected: Thanking recognizes another's agency in providing assistance. In C. Taleghani-Nikazm, E. Betz, & P. Golato (Eds.), Mobilizing others: Grammar and lexis within larger activities (pp. 253-278). Amsterdam: John Benjamins.

    Abstract

    In informal interaction, speakers rarely thank a person who has complied with a request. Examining data from British English, German, Italian, Polish, and Telugu, we ask when speakers do thank after compliance. The results show that thanking treats the other’s assistance as going beyond what could be taken for granted in the circumstances. Coupled with the rareness of thanking after requests, this suggests that cooperation is to a great extent governed by expectations of helpfulness, which can be long-standing, or built over the course of a particular interaction. The higher frequency of thanking in some languages (such as English or Italian) suggests that cultures differ in the importance they place on recognizing the other’s agency in doing as requested.
  • Zora, H., Rudner, M., & Montell Magnusson, A. (2020). Concurrent affective and linguistic prosody with the same emotional valence elicits a late positive ERP response. European Journal of Neuroscience, 51(11), 2236-2249. doi:10.1111/ejn.14658.

    Abstract

    Change in linguistic prosody generates a mismatch negativity response (MMN), indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural response to separate and concurrent processing by electroencephalography (EEG). A spoken pair of Swedish words—[ˈfɑ́ːsɛn] phase and [ˈfɑ̀ːsɛn] damn—that differed in emotional semantics due to linguistic prosody was presented to 16 subjects in an angry and neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords—[ˈvɑ́ːsɛm] and [ˈvɑ̀ːsɛm]—were used as controls. Following the constructionist concept of emotions, accentuating the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence—angry [ˈfɑ̀ːsɛn] damn—would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. In accordance, linguistic prosody elicited an MMN at 300–350 ms, and affective prosody evoked a P3a at 350–400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820–870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain does not only distinguish between these two functions of prosody but also integrates them based on language and experience.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zormpa, E. (2020). Memory for speaking and listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

  • Zuidema, W., French, R. M., Alhama, R. G., Ellis, K., O'Donnell, T. J., Sainburg, T., & Gentner, T. Q. (2020). Five ways in which computational modeling can help advance cognitive science: Lessons from artificial grammar learning. Topics in Cognitive Science, 12(3), 925-941. doi:10.1111/tops.12474.

    Abstract

    There is a rich tradition of building computational models in cognitive science, but modeling, theoretical, and experimental research are not as tightly integrated as they could be. In this paper, we show that computational techniques—even simple ones that are straightforward to use—can greatly facilitate designing, implementing, and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and we give five concrete examples in this domain for (a) formalizing and clarifying theories, (b) generating stimuli, (c) visualization, (d) model selection, and (e) exploring the hypothesis space.
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
