Publications

  • van der Burght, C. L., Numssen, O., Schlaak, B., Goucha, T., & Hartwigsen, G. (2023). Differential contributions of inferior frontal gyrus subregions to sentence processing guided by intonation. Human Brain Mapping, 44(2), 585-598. doi:10.1002/hbm.26086.

    Abstract

    Auditory sentence comprehension involves processing content (semantics), grammar (syntax), and intonation (prosody). The left inferior frontal gyrus (IFG) is involved in sentence comprehension guided by these different cues, with neuroimaging studies preferentially locating syntactic and semantic processing in separate IFG subregions. However, this regional specialisation and its functional relevance have yet to be confirmed. This study probed the role of the posterior IFG (pIFG) for syntactic processing and the anterior IFG (aIFG) for semantic processing with repetitive transcranial magnetic stimulation (rTMS) in a task that required the interpretation of the sentence’s prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions, while receiving 10 Hz rTMS over either left aIFG, pIFG, or vertex (control). Initial behavioural analyses showed an inhibitory effect on accuracy without task-specificity. However, electrical field simulations revealed differential effects for both subregions. In the aIFG, stronger stimulation led to slower semantic processing, with no effect of pIFG stimulation. In contrast, we found a facilitatory effect on syntactic processing in both aIFG and pIFG, where higher stimulation strength was related to faster responses. Our results provide the first evidence for the functional relevance of left aIFG in semantic processing guided by intonation. The stimulation effect on syntactic responses emphasises the importance of the IFG for syntax processing, without supporting the hypothesis of a pIFG-specific involvement. Together, the results support the notion of functionally specialised IFG subregions for diverse but fundamental cues for language processing.

    Additional information

    supplementary information
  • Van Hoey, T., Thompson, A. L., Do, Y., & Dingemanse, M. (2023). Iconicity in ideophones: Guessing, memorizing, and reassessing. Cognitive Science, 47(4): e13268. doi:10.1111/cogs.13268.

    Abstract

    Iconicity, or the resemblance between form and meaning, is often ascribed a special status and contrasted with default assumptions of arbitrariness in spoken language. But does iconicity in spoken language have a special status when it comes to learnability? A simple way to gauge learnability is to see how well something is retrieved from memory. We can further contrast this with guessability, to see (1) whether the ease of guessing the meanings of ideophones outperforms the rate at which they are remembered; and (2) how willing participants are to reassess what they were taught in a prior task—a novel contribution of this study. We replicate prior guessing and memory tasks using ideophones and adjectives from Japanese, Korean, and Igbo. Our results show that although native Cantonese speakers guessed ideophone meanings above chance level, they memorized both ideophones and adjectives with comparable accuracy. However, response time data show that participants took significantly longer to respond correctly to adjective–meaning pairs—indicating a discrepancy in cognitive effort that favored the recognition of ideophones. In a follow-up reassessment task, participants who were taught foil translations were more likely to choose the true translations for ideophones rather than adjectives. By comparing the findings from our guessing and memory tasks, we conclude that iconicity is more accessible if a task requires participants to actively seek out sound-meaning associations.
  • Van Wonderen, E., & Nieuwland, M. S. (2023). Lexical prediction does not rationally adapt to prediction error: ERP evidence from pre-nominal articles. Journal of Memory and Language, 132: 104435. doi:10.1016/j.jml.2023.104435.

    Abstract

    People sometimes predict upcoming words during language comprehension, but debate remains on when and to what extent such predictions indeed occur. The rational adaptation hypothesis holds that predictions develop with expected utility: people predict more strongly when predictions are frequently confirmed (low prediction error) rather than disconfirmed. However, supporting evidence is mixed thus far and has only involved measuring responses to supposedly predicted nouns, not to preceding articles that may also be predicted. The current, large-sample (N = 200) ERP study on written discourse comprehension in Dutch therefore employs the well-known ‘pre-nominal prediction effect’: enhanced N400-like ERPs for articles that are unexpected given a likely upcoming noun’s gender (i.e., the neuter gender article ‘het’ when people expect the common gender noun phrase ‘de krant’, the newspaper) compared to expected articles. We investigated whether the pre-nominal prediction effect is larger when most of the presented stories contain predictable article-noun combinations (75% predictable, 25% unpredictable) compared to when most stories contain unpredictable combinations (25% predictable, 75% unpredictable). Our results show the pre-nominal prediction effect in both contexts, with little evidence to suggest that this effect depended on the percentage of predictable combinations. Moreover, the little evidence suggesting such a dependence was primarily observed for unexpected, neuter-gender articles (‘het’), which is inconsistent with the rational adaptation hypothesis. In line with recent demonstrations (Nieuwland, 2021a,b), our results suggest that linguistic prediction is less ‘rational’ or Bayes optimal than is often suggested.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1998). Brain activity during speaking: From syntax to phonology in 40 milliseconds. Science, 280(5363), 572-574. doi:10.1126/science.280.5363.572.

    Abstract

    In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
  • Van Alphen, P. M., De Bree, E., Gerrits, E., De Jong, J., Wilsenach, C., & Wijnen, F. (2004). Early language development in children with a genetic risk of dyslexia. Dyslexia, 10, 265-288. doi:10.1002/dys.272.

    Abstract

    We report on a prospective longitudinal research programme exploring the connection between language acquisition deficits and dyslexia. The language development profile of children at-risk for dyslexia is compared to that of age-matched controls as well as of children who have been diagnosed with specific language impairment (SLI). The experiments described concern the perception and production of grammatical morphology, categorical perception of speech sounds, phonological processing (non-word repetition), mispronunciation detection, and rhyme detection. The results of each of these indicate that the at-risk children as a group underperform in comparison to the controls, and that, in most cases, they approach the SLI group. It can be concluded that dyslexia most likely has precursors in language development, also in domains other than those traditionally considered conditional for the acquisition of literacy skills. The dyslexia-SLI connection awaits further, particularly qualitative, analyses.
  • Van den Bos, E., & Poletiek, F. H. (2008). Effects of grammar complexity on artificial grammar learning. Memory & Cognition, 36(6), 1122-1131. doi:10.3758/MC.36.6.1122.

    Abstract

    The present study identified two aspects of complexity that have been manipulated in the implicit learning literature and investigated how they affect implicit and explicit learning of artificial grammars. Ten finite state grammars were used to vary complexity. The results indicated that dependency length is more relevant to the complexity of a structure than is the number of associations that have to be learned. Although implicit learning led to better performance on a grammaticality judgment test than did explicit learning, it was negatively affected by increasing complexity: Performance decreased as there was an increase in the number of previous letters that had to be taken into account to determine whether or not the next letter was a grammatical continuation. In particular, the results suggested that implicit learning of higher order dependencies is hampered by the presence of longer dependencies. Knowledge of first-order dependencies was acquired regardless of complexity and learning mode.
  • Van den Brink, D., Van Berkum, J. J. A., Bastiaansen, M. C. M., Tesink, C. M. J. Y., Kos, M., Buitelaar, J. K., & Hagoort, P. (2012). Empathy matters: ERP evidence for inter-individual differences in social language processing. Social Cognitive and Affective Neuroscience, 7, 173-182. doi:10.1093/scan/nsq094.

    Abstract

    When an adult claims he cannot sleep without his teddy bear, people tend to react with surprise. Language interpretation is, thus, influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait, one’s ability to empathize. Individuals who empathize to a greater degree revealed larger N400 effects (as well as a larger increase in γ-band power) to socially relevant information. These results indicate that individuals with high-empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. In contrast, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social pragmatic sentences.
  • Van de Geer, J. P., & Levelt, W. J. M. (1963). Detection of visual patterns disturbed by noise: An exploratory study. Quarterly Journal of Experimental Psychology, 15, 192-204. doi:10.1080/17470216308416324.

    Abstract

    An introductory study of the perception of stochastically specified events is reported. The initial problem was to determine whether the perceiver can split visual input data of this kind into random and determined components. The inability of subjects to do so with the stimulus material used (a filmlike sequence of dot patterns) led to the more general question of how subjects code this kind of visual material. To meet the difficulty of defining the subjects' responses, two experiments were designed. In both, patterns were presented as a rapid sequence of dots on a screen. The patterns were more or less disturbed by “noise,” i.e. the dots did not appear exactly at their proper places. In the first experiment the response was a rating on a semantic scale, in the second an identification from among a set of alternative patterns. The results of these experiments give some insight into the coding systems adopted by the subjects. First, noise appears to be detrimental to pattern recognition, especially to patterns with little spread. Second, this shows connections with the factors obtained from analysis of the semantic ratings, e.g. easily disturbed patterns show a large drop in the semantic regularity factor, when only a little noise is added.
  • Van Alphen, P. M., & Smits, R. (2004). Acoustical and perceptual analysis of the voicing distinction in Dutch initial plosives: The role of prevoicing. Journal of Phonetics, 32(4), 455-491. doi:10.1016/j.wocn.2004.05.001.

    Abstract

    Three experiments investigated the voicing distinction in Dutch initial labial and alveolar plosives. The difference between voiced and voiceless Dutch plosives is generally described in terms of the presence or absence of prevoicing (negative voice onset time). Experiment 1 showed, however, that prevoicing was absent in 25% of voiced plosive productions across 10 speakers. The production of prevoicing was influenced by place of articulation of the plosive, by whether the plosive occurred in a consonant cluster or not, and by speaker sex. Experiment 2 was a detailed acoustic analysis of the voicing distinction, which identified several acoustic correlates of voicing. Prevoicing appeared to be by far the best predictor. Perceptual classification data revealed that prevoicing was indeed the strongest cue that listeners use when classifying plosives as voiced or voiceless. In the cases where prevoicing was absent, other acoustic cues influenced classification, such that some of these tokens were still perceived as being voiced. These secondary cues were different for the two places of articulation. We discuss the paradox raised by these findings: although prevoicing is the most reliable cue to the voicing distinction for listeners, it is not reliably produced by speakers.
  • Van Heuven, W. J. B., Schriefers, H., Dijkstra, T., & Hagoort, P. (2008). Language conflict in the bilingual brain. Cerebral Cortex, 18(11), 2706-2716. doi:10.1093/cercor/bhn030.

    Abstract

    The large majority of humankind is more or less fluent in two or even more languages. This raises the fundamental question of how the language network in the brain is organized such that the correct target language is selected at a particular occasion. Here we present behavioral and functional magnetic resonance imaging data showing that bilingual processing leads to language conflict in the bilingual brain even when the bilinguals’ task only required target language knowledge. This finding demonstrates that the bilingual brain cannot avoid language conflict, because words from the target and nontarget languages become automatically activated during reading. Importantly, stimulus-based language conflict was found in brain regions in the LIPC associated with phonological and semantic processing, whereas response-based language conflict was only found in the pre-supplementary motor area/anterior cingulate cortex when language conflict leads to response conflicts.
  • Van den Bos, E., & Poletiek, F. H. (2008). Intentional artificial grammar learning: When does it work? European Journal of Cognitive Psychology, 20(4), 793-806. doi:10.1080/09541440701554474.

    Abstract

    Actively searching for the rules of an artificial grammar has often been shown to produce no more knowledge than memorising exemplars without knowing that they have been generated by a grammar. The present study investigated whether this ineffectiveness of intentional learning could be overcome by removing dual task demands and providing participants with more specific instructions. The results only showed a positive effect of learning intentionally for participants specifically instructed to find out which letters are allowed to follow each other. These participants were also unaffected by a salient feature. In contrast, for participants who did not know what kind of structure to expect, intentional learning was not more effective than incidental learning and knowledge acquisition was guided by salience.
  • Van Leeuwen, E. J. C., Cronin, K. A., Haun, D. B. M., Mundry, R., & Bodamer, M. D. (2012). Neighbouring chimpanzee communities show different preferences in social grooming behaviour. Proceedings of the Royal Society B: Biological Sciences, 279, 4362-4367. doi:10.1098/rspb.2012.1543.

    Abstract

    Grooming handclasp (GHC) behaviour was originally advocated as the first evidence of social culture in chimpanzees owing to the finding that some populations engage in the behaviour and others do not. To date, however, the validity of this claim and the extent to which this social behaviour varies between groups is unclear. Here, we measured (i) variation, (ii) durability and (iii) expansion of the GHC behaviour in four chimpanzee communities that do not systematically differ in their genetic backgrounds and live in similar ecological environments. Ninety chimpanzees were studied for a total of 1029 h; 1394 GHC bouts were observed between 2010 and 2012. Critically, GHC style (defined by points of bodily contact) could be systematically linked to the chimpanzee’s group identity, showed temporal consistency both within- and between-groups, and could not be accounted for by the arm-length differential between partners. GHC has been part of the behavioural repertoire of the chimpanzees under study for more than 9 years (surpassing durability criterion) and spread across generations (surpassing expansion criterion). These results strongly indicate that chimpanzees’ social behaviour is not only motivated by innate predispositions and individual inclinations, but may also be partly cultural in nature.
  • Van Wingen, G. A., Van Broekhoven, F., Verkes, R. J., Petersson, K. M., Bäckström, T., Buitelaar, J. K., & Fernández, G. (2008). Progesterone selectively increases amygdala reactivity in women. Molecular Psychiatry, 13, 325-333. doi:10.1038/sj.mp.4002030.

    Abstract

    The acute neural effects of progesterone are mediated by its neuroactive metabolites allopregnanolone and pregnanolone. These neurosteroids potentiate the inhibitory actions of γ-aminobutyric acid (GABA). Progesterone is known to produce anxiolytic effects in animals, but recent animal studies suggest that pregnanolone increases anxiety after a period of low allopregnanolone concentration. This effect is potentially mediated by the amygdala and related to the negative mood symptoms in humans that are observed during increased allopregnanolone levels. Therefore, we investigated with functional magnetic resonance imaging (fMRI) whether a single progesterone administration to healthy young women in their follicular phase modulates the amygdala response to salient, biologically relevant stimuli. The progesterone administration increased the plasma concentrations of progesterone and allopregnanolone to levels that are reached during the luteal phase and early pregnancy. The imaging results show that progesterone selectively increased amygdala reactivity. Furthermore, functional connectivity analyses indicate that progesterone modulated functional coupling of the amygdala with distant brain regions. These results reveal a neural mechanism by which progesterone may mediate adverse effects on anxiety and mood.
  • Van Alphen, P. M., & Van Berkum, J. J. A. (2012). Semantic involvement of initial and final lexical embeddings during sense-making: The advantage of starting late. Frontiers in Psychology, 3, 190. doi:10.3389/fpsyg.2012.00190.

    Abstract

    During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like 'day' in 'daisy', or 'dean' in 'sardine'. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding ('day' in 'daisy') did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding ('dean' in 'sardine') did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
  • Van Ackeren, M. J., Casasanto, D., Bekkering, H., Hagoort, P., & Rueschemeyer, S.-A. (2012). Pragmatics in action: Indirect requests engage theory of mind areas and the cortical motor network. Journal of Cognitive Neuroscience, 24, 2237-2247. doi:10.1162/jocn_a_00274.

    Abstract

    Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about mental states of others. The implications of these findings for embodied theories of language are discussed.
  • Van de Ven, M., Ernestus, M., & Schreuder, R. (2012). Predicting acoustically reduced words in spontaneous speech: The role of semantic/syntactic and acoustic cues in context. Laboratory Phonology, 3, 455-481. doi:10.1515/lp-2012-0020.

    Abstract

    In spontaneous speech, words may be realised shorter than in formal speech (e.g., English yesterday may be pronounced like [jɛʃeɩ]). Previous research has shown that context is required to understand highly reduced pronunciation variants. We investigated the extent to which listeners can predict low predictability reduced words on the basis of the semantic/syntactic and acoustic cues in their context. In four experiments, participants were presented with either the preceding context or the preceding and following context of reduced words, and either heard these fragments of conversational speech, or read their orthographic transcriptions. Participants were asked to predict the missing reduced word on the basis of the context alone, choosing from four plausible options. Participants made use of acoustic cues in the context, although casual speech typically has a high speech rate, and acoustic cues are much more unclear than in careful speech. Moreover, they relied on semantic/syntactic cues. Whenever there was a conflict between acoustic and semantic/syntactic contextual cues, measured as the word's probability given the surrounding words, listeners relied more heavily on acoustic cues. Further, context appeared generally insufficient to predict the reduced words, underpinning the significance of the acoustic characteristics of the reduced words themselves.
  • Van Berkum, J. J. A. (2012). Zonder gevoel geen taal. Neerlandistiek.nl. Wetenschappelijk tijdschrift voor de Nederlandse taal- en letterkunde, 12(01).

    Abstract

    Illustrated republication of the inaugural lecture delivered on accepting the chair in Discourse, Cognition and Communication on 30 September 2011 (Utrecht University). Unlike the original lecture text, this republication also contains various illustrations and links. In addition, colleagues in the field have responded in two accompanying articles (see http://www.neerlandistiek.nl/12.01a/ and http://www.neerlandistiek.nl/12.01b/).
  • Van der Werf, O. J., Schuhmann, T., De Graaf, T., Ten Oever, S., & Sack, A. T. (2023). Investigating the role of task relevance during rhythmic sampling of spatial locations. Scientific Reports, 13: 12707. doi:10.1038/s41598-023-38968-z.

    Abstract

    Recently it has been discovered that visuospatial attention operates rhythmically, rather than being stably employed over time. A low-frequency 7–8 Hz rhythmic mechanism coordinates periodic windows to sample relevant locations and to shift towards other, less relevant locations in a visual scene. Rhythmic sampling theories would predict that when two locations are relevant, 8 Hz sampling mechanisms split into two, effectively resulting in a 4 Hz sampling frequency at each location. Therefore, it is expected that rhythmic sampling is influenced by the relative importance of locations for the task at hand. To test this, we employed an orienting task with an arrow cue, where participants were asked to respond to a target presented in one visual field. The cue-to-target interval was systematically varied, allowing us to assess whether performance follows a rhythmic pattern across cue-to-target delays. We manipulated a location’s task relevance by altering the validity of the cue, thereby predicting the correct location in 60%, 80% or 100% of trials. Results revealed significant 4 Hz performance fluctuations at cued right visual field targets with low cue validity (60%), suggesting regular sampling of both locations. With high cue validity (80%), we observed a peak at 8 Hz towards non-cued targets, although not significant. These results were in line with our hypothesis suggesting a goal-directed balancing of attentional sampling (cued location) and shifting (non-cued location) depending on the relevance of locations in a visual scene. However, considering the hemifield specificity of the effect together with the absence of expected effects for cued trials in the high-validity conditions, we further discuss the interpretation of the data.

    Additional information

    supplementary information
  • van der Burght, C. L., Friederici, A. D., Maran, M., Papitto, G., Pyatigorskaya, E., Schroen, J., Trettenbrein, P., & Zaccarella, E. (2023). Cleaning up the brickyard: How theory and methodology shape experiments in cognitive neuroscience of language. Journal of Cognitive Neuroscience, 35(12), 2067-2088. doi:10.1162/jocn_a_02058.

    Abstract

    The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining “language” in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement amongst cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of conclusions drawn rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modelling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
  • Verdonschot, R. G., Middelburg, R., Lensink, S. E., & Schiller, N. O. (2012). Morphological priming survives a language switch. Cognition, 124(3), 343-349. doi:10.1016/j.cognition.2012.05.019.

    Abstract

    In a long-lag morphological priming experiment, Dutch (L1)-English (L2) bilinguals were asked to name pictures and read aloud words. A design using non-switch blocks, consisting solely of Dutch stimuli, and switch-blocks, consisting of Dutch primes and targets with intervening English trials, was administered. Target picture naming was facilitated by morphologically related primes in both non-switch and switch blocks with equal magnitude. These results contrast with some assumptions of sustained reactive inhibition models. However, models that do not assume that bilinguals have to reactively suppress all activation of the non-target language can account for these data.
  • Verga, L., D’Este, G., Cassani, S., Leitner, C., Kotz, S. A., Ferini-Strambi, L., & Galbiati, A. (2023). Sleeping with time in mind? A literature review and a proposal for a screening questionnaire on self-awakening. PLoS One, 18(3): e0283221. doi:10.1371/journal.pone.0283221.

    Abstract

    Some people report being able to spontaneously “time” the end of their sleep. This ability to self-awaken challenges the idea of sleep as a passive cognitive state. Yet, current evidence on this phenomenon is limited, partly because of the varied definitions of self-awakening and experimental approaches used to study it. Here, we provide a review of the literature on self-awakening. Our aim is to i) contextualise the phenomenon, ii) propose an operating definition, and iii) summarise the scientific approaches used so far. The literature review identified 17 studies on self-awakening. Most of them adopted an objective sleep evaluation (76%), targeted nocturnal sleep (76%), and used a single criterion to define the success of awakening (82%); for most studies, this corresponded to awakening occurring in a time window of 30 minutes around the expected awakening time. Out of 715 total participants, 125 (17%) reported to be self-awakeners, with an average age of 23.24 years and a slight predominance of males compared to females. These results reveal self-awakening as a relatively rare phenomenon. To facilitate the study of self-awakening, and based on the results of the literature review, we propose a quick paper-and-pencil screening questionnaire for self-awakeners and provide an initial validation for it. Taken together, the combined results of the literature review and the proposed questionnaire help in characterising a theoretical framework for self-awakenings, while providing a useful tool and empirical suggestions for future experimental studies, which should ideally employ objective measurements.
  • Verga, L., Kotz, S. A., & Ravignani, A. (2023). The evolution of social timing. Physics of Life Reviews, 46, 131-151. doi:10.1016/j.plrev.2023.06.006.

    Abstract

    Sociality and timing are tightly interrelated in human interaction as seen in turn-taking or synchronised dance movements. Sociality and timing also show in communicative acts of other species that might be pleasurable, but also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies not as fruitful as they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
  • Verhoeven, L., Baayen, R. H., & Schreuder, R. (2004). Orthographic constraints and frequency effects in complex word identification. Written Language and Literacy, 7(1), 49-59.

    Abstract

    In an experimental study we explored the role of word frequency and orthographic constraints in the reading of Dutch bisyllabic words. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme E may represent three different vowels: /ε/, /e/, or /œ/. In the experiment, skilled adult readers were presented with lists of bisyllabic words containing the vowel E in the initial syllable and the same grapheme or another vowel in the second syllable. We expected word frequency to be related to word latency scores. On the basis of general word frequency data, we also expected the interpretation of the initial syllable as a stressed /e/ to be facilitated as compared to the interpretation of an unstressed /œ/. We found a strong negative correlation between word frequency and latency scores. Moreover, for words with E in either syllable we found a preference for a stressed /e/ interpretation, indicating a lexical frequency effect. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C., Newbury, D. F., Abrahams, B. S., Winchester, L., Nicod, J., Groszer, M., Alarcón, M., Oliver, P. L., Davies, K. E., Geschwind, D. H., Monaco, A. P., & Fisher, S. E. (2008). A functional genetic link between distinct developmental language disorders. New England Journal of Medicine, 359(22), 2337-2345. doi:10.1056/NEJMoa0802828.

    Abstract

    BACKGROUND: Rare mutations affecting the FOXP2 transcription factor cause a monogenic speech and language disorder. We hypothesized that neural pathways downstream of FOXP2 influence more common phenotypes, such as specific language impairment. METHODS: We performed genomic screening for regions bound by FOXP2 using chromatin immunoprecipitation, which led us to focus on one particular gene that was a strong candidate for involvement in language impairments. We then tested for associations between single-nucleotide polymorphisms (SNPs) in this gene and language deficits in a well-characterized set of 184 families affected with specific language impairment. RESULTS: We found that FOXP2 binds to and dramatically down-regulates CNTNAP2, a gene that encodes a neurexin and is expressed in the developing human cortex. On analyzing CNTNAP2 polymorphisms in children with typical specific language impairment, we detected significant quantitative associations with nonsense-word repetition, a heritable behavioral marker of this disorder (peak association, P=5.0x10(-5) at SNP rs17236239). Intriguingly, this region coincides with one associated with language delays in children with autism. CONCLUSIONS: The FOXP2-CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language.

    Additional information

    nejm_vernes_2337sa1.pdf
  • Vessel, E. A., Pasqualette, L., Uran, C., Koldehoff, S., Bignardi, G., & Vinck, M. (2023). Self-relevance predicts the aesthetic appeal of real and synthetic artworks generated via neural style transfer. Psychological Science, 34(9), 1007-1023. doi:10.1177/09567976231188107.

    Abstract

    What determines the aesthetic appeal of artworks? Recent work suggests that aesthetic appeal can, to some extent, be predicted from a visual artwork’s image features. Yet a large fraction of variance in aesthetic ratings remains unexplained and may relate to individual preferences. We hypothesized that an artwork’s aesthetic appeal depends strongly on self-relevance. In a first study (N = 33 adults, online replication N = 208), rated aesthetic appeal for real artworks was positively predicted by rated self-relevance. In a second experiment (N = 45 online), we created synthetic, self-relevant artworks using deep neural networks that transferred the style of existing artworks to photographs. Style transfer was applied to self-relevant photographs selected to reflect participant-specific attributes such as autobiographical memories. Self-relevant, synthetic artworks were rated as more aesthetically appealing than matched control images, at a level similar to human-made artworks. Thus, self-relevance is a key determinant of aesthetic appeal, independent of artistic skill and image features.

    Additional information

    supplementary materials
  • Viaro, M., Bercelli, F., & Rossano, F. (2008). Una relazione terapeutica: Il terapeuta allenatore. Connessioni: Rivista di consulenza e ricerca sui sistemi umani, 20, 95-105.
  • Vigliocco, G., Vinson, D. P., Indefrey, P., Levelt, W. J. M., & Hellwig, F. M. (2004). Role of grammatical gender and semantics in German word production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 483-497. doi:10.1037/0278-7393.30.2.483.

    Abstract

    Semantic substitution errors (e.g., saying "arm" when "leg" is intended) are among the most common types of errors occurring during spontaneous speech. It has been shown that grammatical gender of German target nouns is preserved in the errors (E. Marx, 1999). In 3 experiments, the authors explored different accounts of the grammatical gender preservation effect in German. In all experiments, semantic substitution errors were induced using a continuous naming paradigm. In Experiment 1, it was found that gender preservation disappeared when speakers produced bare nouns. Gender preservation was found when speakers produced phrases with determiners marked for gender (Experiment 2) but not when the produced determiners were not marked for gender (Experiment 3). These results are discussed in the context of models of lexical retrieval during production.
  • Vingerhoets, G., Verhelst, H., Gerrits, R., Badcock, N., Bishop, D. V. M., Carey, D., Flindall, J., Grimshaw, G., Harris, L. J., Hausmann, M., Hirnstein, M., Jäncke, L., Joliot, M., Specht, K., Westerhausen, R., & LICI consortium (2023). Laterality indices consensus initiative (LICI): A Delphi expert survey report on recommendations to record, assess, and report asymmetry in human behavioural and brain research. Laterality, 28(2-3), 122-191. doi:10.1080/1357650X.2023.2199963.

    Abstract

    Laterality indices (LIs) quantify the left-right asymmetry of brain and behavioural variables and provide a measure that is statistically convenient and seemingly easy to interpret. Substantial variability in how structural and functional asymmetries are recorded, calculated, and reported, however, suggests little agreement on the conditions required for its valid assessment. The present study aimed for consensus on general aspects in this context of laterality research, and more specifically within a particular method or technique (i.e., dichotic listening, visual half-field technique, performance asymmetries, preference bias reports, electrophysiological recording, functional MRI, structural MRI, and functional transcranial Doppler sonography). Experts in laterality research were invited to participate in an online Delphi survey to evaluate consensus and stimulate discussion. In Round 0, 106 experts generated 453 statements on what they considered good practice in their field of expertise. Statements were organised into a 295-statement survey that the experts then were asked, in Round 1, to independently assess for importance and support, which further reduced the survey to 241 statements that were presented again to the experts in Round 2. Based on the Round 2 input, we present a set of critically reviewed key recommendations to record, assess, and report laterality research for various methods.

  • Voermans, N. C., Petersson, K. M., Daudey, L., Weber, B., Van Spaendonck, K. P., Kremer, H. P. H., & Fernández, G. (2004). Interaction between the Human Hippocampus and the Caudate Nucleus during Route Recognition. Neuron, 43, 427-435. doi:10.1016/j.neuron.2004.07.009.

    Abstract

    Navigation through familiar environments can rely upon distinct neural representations that are related to different memory systems with either the hippocampus or the caudate nucleus at their core. However, it is a fundamental question whether and how these systems interact during route recognition. To address this issue, we combined a functional neuroimaging approach with a naturally occurring, well-controlled human model of caudate nucleus dysfunction (i.e., preclinical and early-stage Huntington’s disease). Our results reveal a noncompetitive interaction so that the hippocampus compensates for gradual caudate nucleus dysfunction with a gradual activity increase, maintaining normal behavior. Furthermore, we revealed an interaction between medial temporal and caudate activity in healthy subjects, which was adaptively modified in Huntington patients to allow compensatory hippocampal processing. Thus, the two memory systems contribute in a noncompetitive, cooperative manner to route recognition, which enables the hippocampus to compensate seamlessly for the functional degradation of the caudate nucleus.
  • von Stutterheim, C., Andermann, M., Carroll, M., Flecken, M., & Schmiedtova, B. (2012). How grammaticized concepts shape event conceptualization in language production: Insights from linguistic analysis, eye tracking data, and memory performance. Linguistics, 50(4), 833-867. doi:10.1515/ling-2012-0026.

    Abstract

    The role of grammatical systems in profiling particular conceptual categories is used as a key in exploring questions concerning language specificity during the conceptualization phase in language production. This study focuses on the extent to which crosslinguistic differences in the concepts profiled by grammatical means in the domain of temporality (grammatical aspect) affect event conceptualization and distribution of attention when talking about motion events. The analyses, which cover native speakers of Standard Arabic, Czech, Dutch, English, German, Russian and Spanish, not only involve linguistic evidence, but also data from an eye tracking experiment and a memory test. The findings show that direction of attention to particular parts of motion events varies to some extent with the existence of grammaticized means to express imperfective/progressive aspect. Speakers of languages that do not have grammaticized aspect of this type are more likely to take a holistic view when talking about motion events and attend to as well as refer to endpoints of motion events, in contrast to speakers of aspect languages.

  • De Vos, C., & Palfreyman, N. (2012). [Review of the book Deaf around the World: The impact of language / ed. by Mathur & Napoli]. Journal of Linguistics, 48, 731-735.

    Abstract

    First paragraph. Since its advent half a century ago, the field of sign language linguistics has had close ties to education and the empowerment of deaf communities, a union that is fittingly celebrated by Deaf around the world: The impact of language. With this fruitful relationship in mind, sign language researchers and deaf educators gathered in Philadelphia in 2008, and in the volume under review, Gaurav Mathur & Donna Jo Napoli (henceforth M&N) present a selection of papers from this conference, organised in two parts: ‘Sign languages: Creation, context, form’, and ‘Social issues/civil rights ’. Each of the chapters is accompanied by a response chapter on the same or a related topic. The first part of the volume focuses on the linguistics of sign languages and includes papers on the impact of language modality on morphosyntax, second language acquisition, and grammaticalisation, highlighting the fine balance that sign linguists need to strike when conducting methodologically sound research. The second part of the book includes accounts by deaf activists from countries including China, India, Japan, Kenya, South Africa and Sweden who are considered prominent figures in areas such as deaf education, politics, culture and international development.
  • De Vos, C. (2008). Janger Kolok: de Balinese dovendans. Woord en Gebaar, 12-13.
  • De Vos, C. (2004). Over de biologische functie van taal: Pinker vs. Chomsky. Honours Review, 2(1), 20-25.

    Abstract

    How did complex human language originate? Gradually, through natural selection, because growing grammatical abilities gave humans an evolutionary advantage? Or suddenly, as an unintended by-product or side effect of a genetic mutation, without any adaptive process? In this article I set the arguments of Pinker and Bloom for the first position against the arguments of Chomsky and Gould for the second. I then show that these two extreme positions leave room for other options that merit further investigation. Genetic research in the coming decades, for instance, may yield information that makes it necessary to qualify both positions.
  • De Vries, M. H., Petersson, K. M., Geukes, S., Zwitserlood, P., & Christiansen, M. H. (2012). Processing multiple non-adjacent dependencies: Evidence from sequence learning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2065-2076. doi:10.1098/rstb.2011.0414.

    Abstract

    Processing non-adjacent dependencies is considered to be one of the hallmarks of human language. Assuming that sequence-learning tasks provide a useful way to tap natural-language-processing mechanisms, we cross-modally combined serial reaction time and artificial-grammar learning paradigms to investigate the processing of multiple nested (A1A2A3B3B2B1) and crossed dependencies (A1A2A3B1B2B3), containing either three or two dependencies. Both reaction times and prediction errors highlighted problems with processing the middle dependency in nested structures (A1A2A3B3_B1), reminiscent of the ‘missing-verb effect’ observed in English and French, but not with crossed structures (A1A2A3B1_B3). Prior linguistic experience did not play a major role: native speakers of German and Dutch—which permit nested and crossed dependencies, respectively—showed a similar pattern of results for sequences with three dependencies. As for sequences with two dependencies, reaction times and prediction errors were similar for both nested and crossed dependencies. The results suggest that constraints on the processing of multiple non-adjacent dependencies are determined by the specific ordering of the non-adjacent dependencies (i.e. nested or crossed), as well as the number of non-adjacent dependencies to be resolved (i.e. two or three). Furthermore, these constraints may not be specific to language but instead derive from limitations on structured sequence learning.
  • Wagensveld, B., Segers, E., Van Alphen, P. M., Hagoort, P., & Verhoeven, L. (2012). A neurocognitive perspective on rhyme awareness: The N450 rhyme effect. Brain Research, 1483, 63-70. doi:10.1016/j.brainres.2012.09.018.

    Abstract

    Rhyme processing is reflected in the electrophysiological signals of the brain as a negative deflection for non-rhyming as compared to rhyming stimuli around 450 ms after stimulus onset. Studies have shown that this N450 component is not solely sensitive to rhyme but also responds to other types of phonological overlap. In the present study, we examined whether the N450 component can be used to gain insight into the global similarity effect, indicating that rhyme judgment skills decrease when participants are presented with word pairs that share a phonological overlap but do not rhyme (e.g., bell–ball). We presented 20 adults with auditory rhyming, globally similar overlapping and unrelated word pairs. In addition to measuring behavioral responses by means of a yes/no button press, we also took EEG measures. The behavioral data showed a clear global similarity effect; participants judged overlapping pairs more slowly than unrelated pairs. However, the neural outcomes did not provide evidence that the N450 effect responds differentially to globally similar and unrelated word pairs, suggesting that globally similar and dissimilar non-rhyming pairs are processed in a similar fashion at the stage of early lexical access.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., & Verhoeven, L. (2012). The nature of rhyme processing in preliterate children. British Journal of Educational Psychology, 82, 672-689. doi:10.1111/j.2044-8279.2011.02055.x.

    Abstract

    Background. Rhyme awareness is one of the earliest forms of phonological awareness to develop and is assessed in many developmental studies by means of a simple rhyme task. The influence of more demanding experimental paradigms on rhyme judgment performance is often neglected. Addressing this issue may also shed light on whether rhyme processing is more global or analytical in nature. Aims. The aim of the present study was to examine whether lexical status and global similarity relations influenced rhyme judgments in kindergarten children and if so, if there is an interaction between these two factors. Sample. Participants were 41 monolingual Dutch-speaking preliterate kindergartners (average age 6.0 years) who had not yet received any formal reading education. Method. To examine the effects of lexical status and phonological similarity processing, the kindergartners were asked to make rhyme judgements on (pseudo) word targets that rhymed, phonologically overlapped or were unrelated to (pseudo) word primes. Results. Both a lexicality effect (pseudo-words were more difficult than words) and a global similarity effect (globally similar non-rhyming items were more difficult to reject than unrelated items) were observed. In addition, whereas in words the global similarity effect was only present in accuracy outcomes, in pseudo-words it was also observed in the response latencies. Furthermore, a large global similarity effect in pseudo-words correlated with a low score on short-term memory skills and grapheme knowledge. Conclusions. Increasing task demands led to a more detailed assessment of rhyme processing skills. Current assessment paradigms should therefore be extended with more demanding conditions. In light of the views on rhyme processing, we propose that a combination of global and analytical strategies is used to make a correct rhyme judgment.
  • Wagner, A., & Ernestus, M. (2008). Identification of phonemes: Differences between phoneme classes and the effect of class size. Phonetica, 65(1-2), 106-127. doi:10.1159/000132389.

    Abstract

    This study reports general and language-specific patterns in phoneme identification. In a series of phoneme monitoring experiments, Castilian Spanish, Catalan, Dutch, English, and Polish listeners identified vowel, fricative, and stop consonant targets that are phonemic in all these languages, embedded in nonsense words. Fricatives were generally identified more slowly than vowels, while the speed of identification for stop consonants was highly dependent on the onset of the measurements. Moreover, listeners' response latencies and accuracy in detecting a phoneme correlated with the number of categories within that phoneme's class in the listener's native phoneme repertoire: more native categories slowed listeners down and decreased their accuracy. We excluded the possibility that this effect stems from differences in the frequencies of occurrence of the phonemes in the different languages. Rather, the effect of the number of categories can be explained by general properties of the perception system, which cause language-specific patterns in speech processing.
  • Walker, R. M., Hill, A. E., Newman, A. C., Hamilton, G., Torrance, H. S., Anderson, S. M., Ogawa, F., Derizioti, P., Nicod, J., Vernes, S. C., Fisher, S. E., Thomson, P. A., Porteous, D. J., & Evans, K. L. (2012). The DISC1 promoter: Characterization and regulation by FOXP2. Human Molecular Genetics, 21, 2862-2872. doi:10.1093/hmg/dds111.

    Abstract

    Disrupted in schizophrenia 1 (DISC1) is a leading candidate susceptibility gene for schizophrenia, bipolar disorder, and recurrent major depression, which has been implicated in other psychiatric illnesses of neurodevelopmental origin, including autism. DISC1 was initially identified at the breakpoint of a balanced chromosomal translocation, t(1;11)(q42.1;q14.3), in a family with a high incidence of psychiatric illness. Carriers of the translocation show a 50% reduction in DISC1 protein levels, suggesting altered DISC1 expression as a pathogenic mechanism in psychiatric illness. Altered DISC1 expression in the post-mortem brains of individuals with psychiatric illness and the frequent implication of non-coding regions of the gene by association analysis further support this assertion. Here, we provide the first characterisation of the DISC1 promoter region. Using dual luciferase assays, we demonstrate that a region -300bp to -177bp relative to the transcription start site (TSS) contributes positively to DISC1 promoter activity, whilst a region -982bp to -301bp relative to the TSS confers a repressive effect. We further demonstrate inhibition of DISC1 promoter activity and protein expression by FOXP2, a transcription factor implicated in speech and language function. This inhibition is diminished by two distinct FOXP2 point mutations, R553H and R328X, which were previously found in families affected by developmental verbal dyspraxia (DVD). Our work identifies an intriguing mechanistic link between neurodevelopmental disorders that have traditionally been viewed as diagnostically distinct but which do share varying degrees of phenotypic overlap.
  • Waller, D., Loomis, J. M., & Haun, D. B. M. (2004). Body-based senses enhance knowledge of directions in large-scale environments. Psychonomic Bulletin & Review, 11(1), 157-163.

    Abstract

    Previous research has shown that inertial cues resulting from passive transport through a large environment do not necessarily facilitate acquiring knowledge about its layout. Here we examine whether the additional body-based cues that result from active movement facilitate the acquisition of spatial knowledge. Three groups of participants learned locations along an 840-m route. One group walked the route during learning, allowing access to body-based cues (i.e., vestibular, proprioceptive, and efferent information). Another group learned by sitting in the laboratory, watching videos made from the first group. A third group watched a specially made video that minimized potentially confusing head-on-trunk rotations of the viewpoint. All groups were tested on their knowledge of directions in the environment as well as on its configural properties. Having access to body-based information reduced pointing error by a small but significant amount. Regardless of the sensory information available during learning, participants exhibited strikingly common biases.
  • Wang, L., Jensen, O., Van den Brink, D., Weder, N., Schoffelen, J.-M., Magyari, L., Hagoort, P., & Bastiaansen, M. C. M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912. doi:10.1002/hbm.21410.

    Abstract

    The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time–frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, but not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the left inferior frontal gyrus (LIFG), a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2012). Information structure influences depth of syntactic processing: Event-related potential evidence for the Chomsky illusion. PLoS One, 7(10), e47917. doi:10.1371/journal.pone.0047917.

    Abstract

    Information structure facilitates communication between interlocutors by highlighting relevant information. It has previously been shown that information structure modulates the depth of semantic processing. Here we used event-related potentials to investigate whether information structure can modulate the depth of syntactic processing. In question-answer pairs, subtle (number agreement) or salient (phrase structure) syntactic violations were placed either in focus or out of focus through information structure marking. P600 effects to these violations reflect the depth of syntactic processing. For subtle violations, a P600 effect was observed in the focus condition, but not in the non-focus condition. For salient violations, comparable P600 effects were found in both conditions. These results indicate that information structure can modulate the depth of syntactic processing, but that this effect depends on the salience of the information. When subtle violations are not in focus, they are processed less elaborately. We label this phenomenon the Chomsky illusion.
  • Wang, L., Zhu, Z., & Bastiaansen, M. C. M. (2012). Integration or predictability? A further specification of the functional role of gamma oscillations in language comprehension. Frontiers in Psychology, 3, 187. doi:10.3389/fpsyg.2012.00187.

    Abstract

    Gamma-band neuronal synchronization during sentence-level language comprehension has previously been linked with semantic unification. Here, we attempt to further narrow down the functional significance of gamma during language comprehension, by distinguishing between two aspects of semantic unification: successful integration of word meaning into the sentence context, and prediction of upcoming words. We computed event-related potentials (ERPs) and frequency band-specific electroencephalographic (EEG) power changes while participants read sentences that contained a critical word (CW) that was (1) both semantically congruent and predictable (high cloze, HC), (2) semantically congruent but unpredictable (low cloze, LC), or (3) semantically incongruent (and therefore also unpredictable; semantic violation, SV). The ERP analysis showed the expected parametric N400 modulation (HC < LC < SV). The time-frequency analysis showed qualitatively different results. In the gamma-frequency range, we observed a power increase in response to the CW in the HC condition, but not in the LC and the SV conditions. Additionally, in the theta frequency range we observed a power increase in the SV condition only. Our data provide evidence that gamma power increases are related to the predictability of an upcoming word based on the preceding sentence context, rather than to the integration of the incoming word’s semantics into the preceding context. Further, our theta band data are compatible with the notion that theta band synchronization in sentence comprehension might be related to the detection of an error in the language input.
  • Wang, M., Shao, Z., Verdonschot, R. G., Chen, Y., & Schiller, N. O. (2023). Orthography influences spoken word production in blocked cyclic naming. Psychonomic Bulletin & Review, 30, 383-392. doi:10.3758/s13423-022-02123-y.

    Abstract

    Does the way a word is written influence its spoken production? Previous studies suggest that orthography is involved only when the orthographic representation is highly relevant during speaking (e.g., in reading-aloud tasks). To address this issue, we carried out two experiments using the blocked cyclic picture-naming paradigm. In both experiments, participants were asked to name pictures repeatedly in orthographically homogeneous or heterogeneous blocks. In the naming task, the written form was not shown; in homogeneous blocks, however, the names of the four pictures shared the radical of their first character. A facilitative orthographic effect was found when picture names shared part of their written forms, compared with the heterogeneous condition. This facilitative effect was independent of the position of orthographic overlap (i.e., the left, the lower, or the outer part of the character). These findings strongly suggest that orthography can influence speaking even when it is not highly relevant (i.e., during picture naming) and that the orthographic effect is unlikely to be attributable to strategic preparation.
  • Warner, N., Jongman, A., Sereno, J., & Kemps, R. J. J. K. (2004). Incomplete neutralization and other sub-phonemic durational differences in production and perception: Evidence from Dutch. Journal of Phonetics, 32(2), 251-276. doi:10.1016/S0095-4470(03)00032-9.

    Abstract

    Words which are expected to contain the same surface string of segments may, under identical prosodic circumstances, sometimes be realized with slight differences in duration. Some researchers have attributed such effects to differences in the words’ underlying forms (incomplete neutralization), while others have suggested orthographic influence and extremely careful speech as the cause. In this paper, we demonstrate such sub-phonemic durational differences in Dutch, a language which some past research has found not to have such effects. Past literature has also shown that listeners can often make use of incomplete neutralization to distinguish apparent homophones. We extend perceptual investigations of this topic, and show that listeners can perceive even durational differences which are not consistently observed in production. We further show that a difference which is primarily orthographic rather than underlying can also create such durational differences. We conclude that a wide variety of factors, in addition to underlying form, can induce speakers to produce slight durational differences which listeners can also use in perception.
  • Wassenaar, M., Brown, C. M., & Hagoort, P. (2004). ERP-effects of subject-verb agreement violations in patients with Broca's aphasia. Journal of Cognitive Neuroscience, 16(4), 553-576. doi:10.1162/089892904323057290.

    Abstract

    This article presents electrophysiological data on on-line syntactic processing during auditory sentence comprehension in patients with Broca's aphasia. Event-related brain potentials (ERPs) were recorded from the scalp while subjects listened to sentences that were either syntactically correct or contained violations of subject-verb agreement. Three groups of subjects were tested: Broca patients (n = 10), nonaphasic patients with a right-hemisphere (RH) lesion (n = 5), and healthy age-matched controls (n = 12). The healthy control subjects showed a P600/SPS effect in response to the agreement violations. The nonaphasic patients with an RH lesion showed essentially the same pattern. The overall group of Broca patients did not show this sensitivity. However, the sensitivity was modulated by the severity of the syntactic comprehension impairment. The largest deviation from the standard P600/SPS effect was found in the patients with the relatively more severe syntactic comprehension impairment. In addition, ERPs to tones in a classical tone oddball paradigm were also recorded. Similar to the normal control subjects and RH patients, the group of Broca patients showed a P300 effect in the tone oddball condition. This indicates that aphasia in itself does not lead to a general reduction in all cognitive ERP effects. It was concluded that deviations from the standard P600/SPS effect in the Broca patients reflected difficulties with the on-line maintenance of number information across clausal boundaries for establishing subject-verb agreement.
  • Weber, A., & Cutler, A. (2004). Lexical competition in non-native spoken-word recognition. Journal of Memory and Language, 50(1), 1-25. doi:10.1016/S0749-596X(03)00105-0.

    Abstract

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, ‘lid,’ given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.
  • Weber, A., & Scharenborg, O. (2012). Models of spoken-word recognition. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 387-401. doi:10.1002/wcs.1178.

    Abstract

    All words of the languages we know are stored in the mental lexicon. Psycholinguistic models describe in which format lexical knowledge is stored and how it is accessed when needed for language use. The present article summarizes key findings in spoken-word recognition by humans and describes how models of spoken-word recognition account for them. Although current models of spoken-word recognition differ considerably in the details of implementation, there is general consensus among them on at least three aspects: multiple word candidates are activated in parallel as a word is being heard, activation of word candidates varies with the degree of match between the speech signal and stored lexical representations, and activated candidate words compete for recognition. No consensus has been reached on other aspects such as the flow of information between different processing levels, and the format of stored prelexical and lexical representations.
  • Weber, A., & Crocker, M. W. (2012). On the nature of semantic constraints on lexical access. Journal of Psycholinguistic Research, 41, 195-214. doi:10.1007/s10936-011-9184-0.

    Abstract

    We present two eye-tracking experiments that investigate lexical frequency and semantic context constraints in spoken-word recognition in German. In both experiments, the pivotal words were pairs of nouns overlapping at onset but varying in lexical frequency. In Experiment 1, German listeners showed an expected frequency bias towards high-frequency competitors (e.g., Blume, ‘flower’) when instructed to click on low-frequency targets (e.g., Bluse, ‘blouse’). In Experiment 2, semantically constraining context increased the availability of appropriate low-frequency target words prior to word onset, but did not influence the availability of semantically inappropriate high-frequency competitors at the same time. Immediately after target word onset, however, the activation of high-frequency competitors was reduced in semantically constraining sentences, but still exceeded that of unrelated distractor words significantly. The results suggest that (1) semantic context acts to downgrade activation of inappropriate competitors rather than to exclude them from competition, and (2) semantic context influences spoken-word recognition, over and above anticipation of upcoming referents.
  • Weber, K., & Lavric, A. (2008). Syntactic anomaly elicits a lexico-semantic (N400) ERP effect in the second but not in the first language. Psychophysiology, 45(6), 920-925. doi:10.1111/j.1469-8986.2008.00691.x.

    Abstract

    Recent brain potential research into first versus second language (L1 vs. L2) processing revealed striking responses to morphosyntactic features absent in the mother tongue. The aim of the present study was to establish whether the presence of comparable morphosyntactic features in L1 leads to more similar electrophysiological L1 and L2 profiles. ERPs were acquired while German-English bilinguals and native speakers of English read sentences. Some sentences were meaningful and well formed, whereas others contained morphosyntactic or semantic violations in the final word. In addition to the expected P600 component, morphosyntactic violations in L2 but not L1 led to an enhanced N400. This effect may suggest either that resolution of morphosyntactic anomalies in L2 relies on the lexico-semantic system or that the weaker/slower morphological mechanisms in L2 lead to greater sentence wrap-up difficulties known to result in N400 enhancement.
  • Whelan, L., Dockery, A., Stephenson, K. A. J., Zhu, J., Kopčić, E., Post, I. J. M., Khan, M., Corradi, Z., Wynne, N., O’Byrne, J. J., Duignan, E., Silvestri, G., Roosing, S., Cremers, F. P. M., Keegan, D. J., Kenna, P. F., & Farrar, G. J. (2023). Detailed analysis of an enriched deep intronic ABCA4 variant in Irish Stargardt disease patients. Scientific Reports, 13: 9380. doi:10.1038/s41598-023-35889-9.

    Abstract

    Over 15% of probands in a large cohort of more than 1500 inherited retinal degeneration patients present with a clinical diagnosis of Stargardt disease (STGD1), a recessive form of macular dystrophy caused by biallelic variants in the ABCA4 gene. Participants were clinically examined and underwent either target capture sequencing of the exons and some pathogenic intronic regions of ABCA4, sequencing of the entire ABCA4 gene, or whole genome sequencing. ABCA4 c.4539+2028C>T, p.[=,Arg1514Leufs*36] is a pathogenic deep intronic variant that results in a retina-specific 345-nucleotide pseudoexon inclusion. Through analysis of the Irish STGD1 cohort, 25 individuals across 18 pedigrees harbour ABCA4 c.4539+2028C>T and another pathogenic variant. This includes, to the best of our knowledge, the only two homozygous patients identified to date. This provides important evidence of variant pathogenicity for this deep intronic variant, highlighting the value of homozygotes for variant interpretation. Fifteen other heterozygous occurrences of this variant have been reported in patients globally, indicating significant enrichment in the Irish population. We provide detailed genetic and clinical characterization of these patients, illustrating that ABCA4 c.4539+2028C>T is a variant of mild to intermediate severity. These results have important implications for unresolved STGD1 patients globally, with approximately 10% of the population in some western countries claiming Irish heritage. This study exemplifies that detection and characterization of founder variants is a diagnostic imperative.

    Additional information

    supplemental material
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2012). Corrigendum to CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 11, 501. doi:10.1111/j.1601-183X.2012.00806.x.

    Abstract

    Corrigendum to: Whitehouse, A. J. O., Bishop, D. V. M., Ang, Q. W., Pennell, C. E., & Fisher, S. E. (2011). CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 10, 451-456. doi:10.1111/j.1601-183X.2011.00684.x. The authors have detected a typographical error in the Abstract of that paper. The error is in the fifth sentence, which reads: “On the basis of these findings, we performed analyses of four-marker haplotypes of rs2710102–rs759178–rs17236239–rs2538976 and identified significant association (haplotype TTAA, P = 0.049; haplotype GCAG, P = 0.0014).” Rather than “GCAG”, the final haplotype should read “CGAG”. This typographical error was made in the Abstract only and has no bearing on the results or conclusions of the study, which remain unchanged.
  • Whitehouse, H., & Cohen, E. (2012). Seeking a rapprochement between anthropology and the cognitive sciences: A problem-driven approach. Topics in Cognitive Science, 4, 404-412. doi:10.1111/j.1756-8765.2012.01203.x.

    Abstract

    Beller, Bender, and Medin question the necessity of including social anthropology within the cognitive sciences. We argue that there is great scope for fruitful rapprochement while agreeing that there are obstacles (even if we might wish to debate some of those specifically identified by Beller and colleagues). We frame the general problem differently, however: not in terms of the problem of reconciling disciplines and research cultures, but rather in terms of the prospects for collaborative deployment of expertise (methodological and theoretical) in problem-driven research. For the purposes of illustration, our focus in this article is on the evolution of cooperation.
  • Widlok, T. (2004). Ethnography in language documentation. Language Archive Newsletter, 1(3), 4-6.
  • Widlok, T. (2008). Landscape unbounded: Space, place, and orientation in ≠Akhoe Hai//om and beyond. Language Sciences, 30(2/3), 362-380. doi:10.1016/j.langsci.2006.12.002.

    Abstract

    Even before it became commonplace to assume that “the Eskimo have a hundred words for snow”, the languages of hunting and gathering people have played an important role in debates about linguistic relativity concerning geographical ontologies. Evidence from languages of hunter-gatherers has been used in radical relativist challenges to the overall notion of a comparative typology of generic natural forms and landscapes as terms of reference. It has been invoked to emphasize a personalized relationship between humans and the non-human world. It is against this background that this contribution discusses the landscape terminology of ≠Akhoe Hai//om, a Khoisan language spoken by “Bushmen” in Namibia. Landscape vocabulary is ubiquitous in ≠Akhoe Hai//om due to the fact that the landscape plays a critical role in directionals and other forms of “topographical gossip” and due to merges between landscape and group terminology. This system of landscape-cum-group terminology is outlined and related to the use of place names in the area.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2008). Seeing and hearing meaning: ERP and fMRI evidence of word versus picture integration into a sentence context. Journal of Cognitive Neuroscience, 20, 1235-1249. doi:10.1162/jocn.2008.20085.

    Abstract

    Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
  • Willems, R. M., & Francken, J. C. (2012). Embodied cognition: Taking the next step. Frontiers in Psychology, 3, 582. doi:10.3389/fpsyg.2012.00582.

    Abstract

    Recent years have seen a large number of empirical studies related to ‘embodied cognition’. While this work is interesting and valuable, there is something dissatisfying about the current state of affairs in this research domain. Hypotheses tend to be underspecified, testing in general terms for embodied versus disembodied processing. The lack of specificity of current hypotheses can easily lead to an erosion of the embodiment concept, and result in a situation in which essentially any effect is taken as positive evidence. Such erosion is not helpful to the field and does not do justice to the importance of embodiment. Here we want to take stock, and formulate directions for how embodiment can be studied in a more fruitful fashion. As an illustration, we will describe a few studies that have investigated the role of sensori-motor systems in the coding of meaning (‘embodied semantics’). Instead of focusing on the dichotomy between embodied and disembodied theories, we suggest that the field move forward and ask how and when sensori-motor systems and behavior are involved in cognition.
  • Willems, R. M., Oostenveld, R., & Hagoort, P. (2008). Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension. Brain Research, 1219, 78-90. doi:10.1016/j.brainres.2008.04.065.

    Abstract

    Language is often perceived together with visual information. This raises the question of how the brain integrates information conveyed in visual and/or linguistic format during spoken language comprehension. In this study we investigated the dynamics of semantic integration of visual and linguistic information by means of time-frequency analysis of the EEG signal. A modified version of the N400 paradigm with either a word or a picture of an object being semantically incongruous with respect to the preceding sentence context was employed. Event-Related Potential (ERP) analysis showed qualitatively similar N400 effects for integration of either word or picture. Time-frequency analysis revealed early specific decreases in alpha and gamma band power for linguistic and visual information, respectively. We argue that these reflect a rapid context-based analysis of acoustic (word) or visual (picture) form information. We conclude that although full semantic integration of linguistic and visual information occurs through a common mechanism, early differences in oscillations in specific frequency bands reflect the format of the incoming information and, importantly, an early context-based detection of its congruity with respect to the preceding language context.
  • Williams, N. M., Williams, H., Majounie, E., Norton, N., Glaser, B., Morris, H. R., Owen, M. J., & O'Donovan, M. C. (2008). Analysis of copy number variation using quantitative interspecies competitive PCR. Nucleic Acids Research, 36(17): e112. doi:10.1093/nar/gkn495.

    Abstract

    Over recent years, small submicroscopic DNA copy-number variants (CNVs) have been highlighted as an important source of variation in the human genome, human phenotypic diversity and disease susceptibility. Consequently, there is a pressing need for the development of methods that allow the efficient, accurate and cheap measurement of genomic copy number polymorphisms in clinical cohorts. We have developed a simple competitive PCR-based method to determine DNA copy number that uses the entire genome of a single chimpanzee as a competitor, thus eliminating the requirement for competitive sequences to be synthesized for each assay. This results in the requirement for only a single reference sample for all assays and dramatically increases the potential for large numbers of loci to be analysed in multiplex. In this study we establish proof of concept by accurately detecting previously characterized mutations at the PARK2 locus and then demonstrating the potential of quantitative interspecies competitive PCR (qicPCR) to accurately genotype CNVs in association studies by analysing chromosome 22q11 deletions in a sample of previously characterized patients and normal controls.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2004). Technology and Tools for Language Documentation. Language Archive Newsletter, 1(4), 3-4.
  • Wittenburg, P. (2004). Training Course in Lithuania. Language Archive Newsletter, 1(2), 6-6.
  • Wittenburg, P. (2008). Die CLARIN Forschungsinfrastruktur [The CLARIN research infrastructure]. ÖGAI-journal (Österreichische Gesellschaft für Artificial Intelligence), 27, 10-17.
  • Wittenburg, P., Dirksmeyer, R., Brugman, H., & Klaas, G. (2004). Digital formats for images, audio and video. Language Archive Newsletter, 1(1), 3-6.
  • Wittenburg, P. (2004). International Expert Meeting on Access Management for Distributed Language Archives. Language Archive Newsletter, 1(3), 12-12.
  • Wittenburg, P. (2004). Final review of INTERA. Language Archive Newsletter, 1(4), 11-12.
  • Wittenburg, P. (2004). LinguaPax Forum on Language Diversity, Sustainability, and Peace. Language Archive Newsletter, 1(3), 13-13.
  • Wittenburg, P. (2004). LREC conference 2004. Language Archive Newsletter, 1(3), 12-13.
  • Wittenburg, P. (2004). News from the Archive of the Max Planck Institute for Psycholinguistics. Language Archive Newsletter, 1(4), 12-12.
  • Wolters, G., & Poletiek, F. H. (2008). Beslissen over aangiftes van seksueel misbruik bij kinderen [Deciding on reports of child sexual abuse]. De Psycholoog, 43, 29-29.
  • Xiang, H., Dediu, D., Roberts, L., Van Oort, E., Norris, D., & Hagoort, P. (2012). The structural connectivity underpinning language aptitude, working memory and IQ in the perisylvian language network. Language Learning, 62(Supplement S2), 110-130. doi:10.1111/j.1467-9922.2012.00708.x.

    Abstract

    We carried out the first study on the relationship between individual language aptitude and structural connectivity of language pathways in the adult brain. We measured four components of language aptitude (vocabulary learning, VocL; sound recognition, SndRec; sound-symbol correspondence, SndSym; and grammatical inferencing, GrInf) using the LLAMA language aptitude test (Meara, 2005). Spatial working memory (SWM), verbal working memory (VWM) and IQ were also measured as control factors. Diffusion Tensor Imaging (DTI) was employed to investigate the structural connectivity of language pathways in the perisylvian language network. Principal Component Analysis (PCA) on behavioural measures suggested that a general ability might be important to the first stages of L2 acquisition. It also suggested that VocL, SndSym and SWM are more closely related to general IQ than SndRec and VocL, and distinguished the tasks specifically designed to tap into L2 acquisition (VocL, SndRec, SndSym and GrInf) from more generic measures (IQ, SWM and VWM). Regression analysis suggested significant correlations between most of these behavioural measures and the structural connectivity of certain language pathways, i.e., VocL and BA47-Parietal pathway, SndSym and inter-hemispheric BA45 pathway, GrInf and BA45-Temporal pathway and BA6-Temporal pathway, IQ and BA44-Parietal pathway, BA47-Parietal pathway, BA47-Temporal pathway and inter-hemispheric BA45 pathway, SWM and inter-hemispheric BA6 pathway and BA47-Parietal pathway, and VWM and BA47-Temporal pathway. These results are discussed in relation to relevant findings in the literature.
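    A minimal sketch of the kind of analysis described above (a PCA over the behavioural battery followed by a regression against a tract-wise connectivity index) is given below. It is purely illustrative and not the authors' code: the synthetic scores, the single connectivity measure, and the use of scikit-learn and SciPy are assumptions made for the example.

      import numpy as np
      from sklearn.decomposition import PCA
      from scipy import stats

      rng = np.random.default_rng(0)

      # Synthetic stand-ins for the behavioural battery (hypothetical columns:
      # VocL, SndRec, SndSym, GrInf, IQ, SWM, VWM) for 30 participants.
      behaviour = rng.normal(size=(30, 7))

      # Step 1: PCA to ask whether a general component spans the measures.
      pca = PCA(n_components=3)
      pca.fit(behaviour)
      print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))

      # Step 2: regress one behavioural measure (here the first column) on a
      # per-participant connectivity index for one tract (synthetic here,
      # standing in for, e.g., a mean diffusion measure along a DTI-traced pathway).
      connectivity = rng.normal(size=30)
      slope, intercept, r, p, se = stats.linregress(connectivity, behaviour[:, 0])
      print(f"slope={slope:.2f}, r={r:.2f}, p={p:.3f}")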
  • Li, X., Yang, Y., & Hagoort, P. (2008). Pitch accent and lexical tone processing in Chinese discourse comprehension: An ERP study. Brain Research, 1222, 192-200. doi:10.1016/j.brainres.2008.05.031.

    Abstract

    In the present study, event-related brain potentials (ERP) were recorded to investigate the role of pitch accent and lexical tone in spoken discourse comprehension. Chinese was used as material to explore the potential difference in the nature and time course of brain responses to sentence meaning as indicated by pitch accent and to lexical meaning as indicated by tone. In both cases, the pitch contour of critical words was varied. The results showed that both inconsistent pitch accent and inconsistent lexical tone yielded N400 effects, and there was no interaction between them. The negativity evoked by inconsistent pitch accent had the same topography as that evoked by inconsistent lexical tone violation, with a maximum over central–parietal electrodes. Furthermore, the effect for the combined violations was the sum of effects for pure pitch accent and pure lexical tone violation. However, the effect for the lexical tone violation appeared approximately 90 ms earlier than the effect of the pitch accent violation. It is suggested that there might be a correspondence between the neural mechanism underlying pitch accent and lexical meaning processing in context. They both reflect the integration of the current information into a discourse context, independent of whether the current information was sentence meaning indicated by accentuation, or lexical meaning indicated by tone. In addition, lexical meaning was processed earlier than sentence meaning conveyed by pitch accent during spoken language processing.
  • You, W., Zhang, Q., & Verdonschot, R. G. (2012). Masked syllable priming effects in word and picture naming in Chinese. PLoS One, 7(10): e46595. doi:10.1371/journal.pone.0046595.

    Abstract

    Four experiments investigated the role of the syllable in Chinese spoken word production. Chen, Chen and Ferrand (2003) reported a syllable priming effect when primes and targets shared the first syllable using a masked priming paradigm in Chinese. Our Experiment 1 was a direct replication of Chen et al.'s (2003) Experiment 3 employing CV (e.g., /ba2.ying2/, strike camp) and CVG (e.g., /bai2.shou3/, white haired) syllable types. Experiment 2 tested the syllable priming effect using different syllable types: e.g., CV (/qi4.qiu2/, balloon) and CVN (/qing1.ting2/, dragonfly). Experiment 3 investigated this issue further using line drawings of common objects as targets that were preceded either by a CV (e.g., /qi3/, attempt), or a CVN (e.g., /qing2/, affection) prime. Experiment 4 further examined the priming effect by a comparison between CV or CVN priming and an unrelated priming condition using CV-NX (e.g., /mi2.ni3/, mini) and CVN-CX (e.g., /min2.ju1/, dwellings) as target words. These four experiments consistently found that CV targets were named faster when preceded by CV primes than when they were preceded by CVG, CVN or unrelated primes, whereas CVG or CVN targets showed the reverse pattern. These results indicate that the priming effect critically depends on the match between the structure of the prime and that of the first syllable of the target. The effect obtained in this study was consistent across different stimuli and different tasks (word and picture naming), and provides more conclusive and consistent data regarding the role of the syllable in Chinese speech production.
  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2023). The role of multimodal cues in second language comprehension. Scientific Reports, 13: 20824. doi:10.1038/s41598-023-47643-2.

    Abstract

    In face-to-face communication, multimodal cues such as prosody, gestures, and mouth movements can play a crucial role in language processing. While several studies have addressed how these cues contribute to native (L1) language processing, their impact on non-native (L2) comprehension is largely unknown. Comprehension of naturalistic language by L2 comprehenders may be supported by the presence of (at least some) multimodal cues, as these provide correlated and convergent information that may aid linguistic processing. However, it is also the case that multimodal cues may be less used by L2 comprehenders because linguistic processing is more demanding than for L1 comprehenders, leaving more limited resources for the processing of multimodal cues. In this study, we investigated how L2 comprehenders use multimodal cues in naturalistic stimuli (while participants watched videos of a speaker), as measured by electrophysiological responses (N400) to words, and whether there are differences between L1 and L2 comprehenders. We found that prosody, gestures, and informative mouth movements each reduced the N400 in L2, indexing easier comprehension. Nevertheless, L2 participants showed weaker effects for each cue compared to L1 comprehenders, with the exception of meaningful gestures and informative mouth movements. These results show that L2 comprehenders focus on specific multimodal cues – meaningful gestures that support meaningful interpretation and mouth movements that enhance the acoustic signal – while using multimodal cues to a lesser extent than L1 comprehenders overall.

    Additional information

    supplementary materials
  • Wu, S., Zhao, J., de Villiers, J., Liu, X. L., Rolfhus, E., Sun, X. N., Li, X. Y., Pan, H., Wang, H. W., Zhu, Q., Dong, Y. Y., Zhang, Y. T., & Jiang, F. (2023). Prevalence, co-occurring difficulties, and risk factors of developmental language disorder: First evidence for Mandarin-speaking children in a population-based study. The Lancet Regional Health - Western Pacific, 34: 100713. doi:10.1016/j.lanwpc.2023.100713.

    Abstract

    Background: Developmental language disorder (DLD) is a condition that significantly affects children's achievement but has been understudied. We aim to estimate the prevalence of DLD in Shanghai, compare the co-occurrence of difficulties between children with DLD and those with typical development (TD), and investigate the early risk factors for DLD.

    Methods: We estimated DLD prevalence using data from a population-based survey with a cluster random sampling design in Shanghai, China. A subsample of children (aged 5-6 years) received an onsite evaluation, and each child was categorized as TD or DLD. The proportions of children with socio-emotional behavior (SEB) difficulties, low non-verbal IQ (NVIQ), and poor school readiness were calculated among children with TD and DLD. We used multiple imputation to address the missing values of risk factors. Univariate and multivariate regression models adjusted with sampling weights were used to estimate the correlation of each risk factor with DLD.

    Findings: Of 1082 children who were approached for the onsite evaluation, 974 (90.0%) completed the language ability assessments, of whom 74 met the criteria for DLD, resulting in a prevalence of 8.5% (95% CI 6.3-11.5) when adjusted with sampling weights. Compared with TD children, children with DLD had higher rates of concurrent difficulties, including SEB (total difficulties score at-risk: 156 [17.3%] of 900 TD vs. 28 [37.8%] of 74 DLD, p < 0.0001), low NVIQ (3 [0.3%] of 900 TD vs. 8 [10.8%] of 74 DLD, p < 0.0001), and poor school readiness (71 [7.9%] of 900 TD vs. 13 [17.6%] of 74 DLD, p = 0.0040). After accounting for all other risk factors, a higher risk of DLD was associated with a lack of parent-child interaction diversity (adjusted odds ratio [aOR] = 3.08, 95% CI = 1.29-7.37; p = 0.012) and lower kindergarten levels (compared to demonstration and first level: third level (aOR = 6.15, 95% CI = 1.92-19.63; p = 0.0020)).

    Interpretation: The prevalence of DLD and its co-occurrence with other difficulties suggest the need for further attention. Family and kindergarten factors were found to contribute to DLD, suggesting that multi-sector coordinated efforts are needed to better identify and serve DLD populations at home, in schools, and in clinical settings.

    Funding: The study was supported by Shanghai Municipal Education Commission (No. 2022you1-2, D1502), the Innovative Research Team of High-level Local Universities in Shanghai (No. SHSMU-ZDCX20211900), Shanghai Municipal Health Commission (No.GWV-10.1-XK07), and the National Key Research and Development Program of China (No. 2022YFC2705201).
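    The sampling-weight-adjusted prevalence estimate reported above can be sketched, purely for illustration, as follows. This is not the study's analysis code: the weights, the bootstrap confidence interval, and all numbers generated here are assumptions made for the example.

      import numpy as np

      rng = np.random.default_rng(1)

      n = 974                                  # children with completed assessments
      dld = np.zeros(n)
      dld[:74] = 1                             # 74 children meeting DLD criteria
      weights = rng.uniform(0.5, 2.0, size=n)  # hypothetical cluster-sampling weights

      def weighted_prevalence(y, w):
          # Weighted proportion: sum of weights for cases over total weight.
          return float(np.sum(w * y) / np.sum(w))

      point = weighted_prevalence(dld, weights)

      # Simple bootstrap over participants to approximate a 95% CI.
      boot = []
      for _ in range(2000):
          idx = rng.integers(0, n, size=n)
          boot.append(weighted_prevalence(dld[idx], weights[idx]))
      low, high = np.percentile(boot, [2.5, 97.5])
      print(f"weighted prevalence = {point:.3f} (95% CI {low:.3f}-{high:.3f})")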
  • Zhu, Z., Hagoort, P., Zhang, J. X., Feng, G., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2012). The anterior left inferior frontal gyrus contributes to semantic unification. NeuroImage, 60, 2230-2237. doi:10.1016/j.neuroimage.2012.02.036.

    Abstract

    Semantic unification, the process by which small blocks of semantic information are combined into a coherent utterance, has been studied with various types of tasks. However, whether the brain activations reported in these studies are attributed to semantic unification per se or to other task-induced concomitant processes still remains unclear. The neural basis for semantic unification in sentence comprehension was examined using event-related potentials (ERP) and functional Magnetic Resonance Imaging (fMRI). The semantic unification load was manipulated by varying the goodness of fit between a critical word and its preceding context (in high cloze, low cloze and violation sentences). The sentences were presented in a serial visual presentation mode. The participants were asked to perform one of three tasks: semantic congruency judgment (SEM), silent reading for comprehension (READ), or font size judgment (FONT), in separate sessions. The ERP results showed a similar N400 amplitude modulation by the semantic unification load across all of the three tasks. The brain activations associated with the semantic unification load were found in the anterior left inferior frontal gyrus (aLIFG) in the FONT task and in a widespread set of regions in the other two tasks. These results suggest that the aLIFG activation reflects a semantic unification, which is different from other brain activations that may reflect task-specific strategic processing.

    Additional information

    Zhu_2012_suppl.dot
  • Zioga, I., Weissbart, H., Lewis, A. G., Haegens, S., & Martin, A. E. (2023). Naturalistic spoken language comprehension is supported by alpha and beta oscillations. The Journal of Neuroscience, 43(20), 3718-3732. doi:10.1523/JNEUROSCI.1500-22.2023.

    Abstract

    Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional role of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from the dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. Left temporal, fundamental language regions are involved in language comprehension in α, while frontal and parietal, higher-order language regions, and motor regions are involved in β. Critically, α- and β-band dynamics seem to subserve language comprehension tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation processes. Because of the temporal similarity of the α-β responses, their potential functional dissociation remains to be elucidated. Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes.
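    The forward-modelling step described above (predicting band power from word-level dependency features) can be sketched, purely for illustration, as follows. This is not the authors' pipeline: the ridge-regression encoding model, the simulated dependency counts, and the simulated power values are assumptions made for the example.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)

      n_words = 500
      # Hypothetical per-word dependency-state counts: opened, maintained, resolved.
      X = rng.poisson(lam=[1.0, 2.0, 1.0], size=(n_words, 3)).astype(float)

      # Simulated "band power" at each word: a weighted mix of the features plus noise.
      true_w = np.array([0.4, -0.2, 0.6])
      y = X @ true_w + rng.normal(scale=1.0, size=n_words)

      # Encoding (forward) model: predict power from the dependency features and
      # evaluate it with cross-validated R^2.
      model = Ridge(alpha=1.0)
      r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
      print("cross-validated R^2 per fold:", np.round(r2, 2))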
  • Zora, H., Wester, J. M., & Csépe, V. (2023). Predictions about prosody facilitate lexical access: Evidence from P50/N100 and MMN components. International Journal of Psychophysiology, 194: 112262. doi:10.1016/j.ijpsycho.2023.112262.

    Abstract

    Research into the neural foundation of perception asserts a model where top-down predictions modulate the bottom-up processing of sensory input. Despite becoming increasingly influential in cognitive neuroscience, the precise account of this predictive coding framework remains debated. In this study, we aim to contribute to this debate by investigating how predictions about prosody facilitate speech perception, and to shed light especially on lexical access influenced by simultaneous predictions in different domains, inter alia, prosodic and semantic. Using a passive auditory oddball paradigm, we examined neural responses to prosodic changes, leading to a semantic change as in Dutch nouns canon [ˈkaːnɔn] ‘canon’ vs kanon [kaːˈnɔn] ‘cannon’, and used acoustically identical pseudowords as controls. Results from twenty-eight native speakers of Dutch (age range 18–32 years) indicated an enhanced P50/N100 complex to prosodic change in pseudowords as well as an MMN response to both words and pseudowords. The enhanced P50/N100 response to pseudowords is claimed to indicate that all relevant auditory information is still processed by the brain, whereas the reduced response to words might reflect the suppression of information that has already been encoded. The MMN response to pseudowords and words, on the other hand, is best justified by the unification of previously established prosodic representations with sensory and semantic input, respectively. This pattern of results is in line with the predictive coding framework acting on multiple levels and is of crucial importance to indicate that predictions about linguistic prosodic information are utilized by the brain as early as 50 ms.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2023). In conversation, answers are remembered better than the questions themselves. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(12), 1971-1988. doi:10.1037/xlm0001292.

    Abstract

    Language is used in communicative contexts to identify and successfully transmit new information that should be later remembered. In three studies, we used question–answer pairs, a naturalistic device for focusing information, to examine how properties of conversations inform later item memory. In Experiment 1, participants viewed three pictures while listening to a recorded question–answer exchange between two people about the locations of two of the displayed pictures. In a memory recognition test conducted online a day later, participants recognized the names of pictures that served as answers more accurately than the names of pictures that appeared as questions. This suggests that this type of focus indeed boosts memory. In Experiment 2, participants listened to the same items embedded in declarative sentences. There was a reduced memory benefit for the second item, confirming the role of linguistic focus on later memory beyond a simple serial-position effect. In Experiment 3, two participants asked and answered the same questions about objects in a dialogue. Here, answers continued to receive a memory benefit, and this focus effect was accentuated by language production such that information-seekers remembered the answers to their questions better than information-givers remembered the questions they had been asked. Combined, these studies show how people’s memory for conversation is modulated by the referential status of the items mentioned and by the speaker roles of the conversation participants.
  • Zwaan, R. A., Van der Stoep, N., Guadalupe, T., & Bouwmeester, S. (2012). Language comprehension in the balance: The robustness of the action-compatibility effect (ACE). PLoS One, 7(2), e31204. doi:10.1371/journal.pone.0031204.

    Abstract

    How does language comprehension interact with motor activity? We investigated the conditions under which comprehending an action sentence affects people's balance. We performed two experiments to assess whether sentences describing forward or backward movement modulate the lateral movements made by subjects who made sensibility judgments about the sentences. In one experiment subjects were standing on a balance board and in the other they were seated on a balance board that was mounted on a chair. This allowed us to investigate whether the action compatibility effect (ACE) is robust and persists in the face of salient incompatibilities between sentence content and subject movement. Growth-curve analysis of the movement trajectories produced by the subjects in response to the sentences suggests that the ACE is indeed robust. Sentence content influenced movement trajectory despite salient inconsistencies between implied and actual movement. These results are interpreted in the context of the current discussion of embodied, or grounded, language comprehension and meaning representation.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). An empirical investigation of expression of multiple entities in Turkish Sign Language (TİD): Considering the effects of modality. Lingua, 122, 1636 -1667. doi:10.1016/j.lingua.2012.08.010.

    Abstract

    This paper explores the expression of multiple entities in Turkish Sign Language (Türk İşaret Dili; TİD), a less well-studied sign language. It aims to provide a comprehensive description of the ways and frequencies in which entity plurality in this language is expressed, both within and outside the noun phrase. We used a corpus that includes both elicited and spontaneous data from native signers. The results reveal that most of the expressions of multiple entities in TİD are iconic, spatial strategies (i.e. localization and spatial plural predicate inflection) none of which, we argue, should be considered as genuine plural marking devices with the main aim of expressing plurality. Instead, the observed devices for localization and predicate inflection allow for a plural interpretation when multiple locations in space are used. Our data do not provide evidence that TİD employs (productive) morphological plural marking (i.e. reduplication) on nouns, in contrast to some other sign languages and many spoken languages. We relate our findings to expression of multiple entities in other signed languages and in spoken languages and discuss these findings in terms of modality effects on expression of multiple entities in human language.
  • Zwitserlood, I. (2008). Grammatica-vertaalmethode en nederlandse gebarentaal [The grammar-translation method and Sign Language of the Netherlands]. Levende Talen Magazine, 95(5), 28-29.
