Publications

  • Van Berkum, J. J. A., De Goede, D., Van Alphen, P. M., Mulder, E. R., & Kerstholt, J. H. (2013). How robust is the language architecture? The case of mood. Frontiers in Psychology, 4: 505. doi:10.3389/fpsyg.2013.00505.

    Abstract

    In neurocognitive research on language, the processing principles of the system at hand are usually assumed to be relatively invariant. However, research on attention, memory, decision-making, and social judgment has shown that mood can substantially modulate how the brain processes information. For example, in a bad mood, people typically have a narrower focus of attention and rely less on heuristics. In the face of such pervasive mood effects elsewhere in the brain, it seems unlikely that language processing would remain untouched. In an EEG experiment, we manipulated the mood of participants just before they read texts that confirmed or disconfirmed verb-based expectations about who would be talked about next (e.g., that “David praised Linda because … ” would continue about Linda, not David), or that respected or violated a syntactic agreement rule (e.g., “The boys turns”). ERPs showed that mood had little effect on syntactic parsing, but did substantially affect referential anticipation: whereas readers anticipated information about a specific person when they were in a good mood, a bad mood completely abolished such anticipation. A behavioral follow-up experiment suggested that a bad mood did not interfere with verb-based expectations per se, but prevented readers from using that information rapidly enough to predict upcoming reference on the fly, as the sentence unfolds. In all, our results reveal that background mood, a rather unobtrusive affective state, selectively changes a crucial aspect of real-time language processing. This observation fits well with other observed interactions between language processing and affect (emotions, preferences, attitudes, mood), and more generally testifies to the importance of studying “cold” cognitive functions in relation to “hot” aspects of the brain.
  • Van Berkum, J. J. A., & De Jong, T. (1991). Instructional environments for simulations. Education & Computing, 6(3/4), 305-358.

    Abstract

    The use of computer simulations in education and training can have substantial advantages over other approaches. In comparison with alternatives such as textbooks, lectures, and tutorial courseware, a simulation-based approach offers the opportunity to learn in a relatively realistic problem-solving context, to practise task performance without stress, to systematically explore both realistic and hypothetical situations, to change the time-scale of events, and to interact with simplified versions of the process or system being simulated. However, learners are often unable to cope with the freedom offered by, and the complexity of, a simulation. As a result, many of them resort to an unsystematic, unproductive mode of exploration. There is evidence that simulation-based learning can be improved if the learner is supported while working with the simulation. Constructing such an instructional environment around simulations seems to run counter to the freedom the learner is allowed in ‘stand-alone’ simulations. The present article explores instructional measures that allow optimal freedom for the learner. An extensive discussion of learning goals brings two main types to the fore: conceptual knowledge and operational knowledge. A third type of learning goal refers to the knowledge acquisition (exploratory learning) process. Cognitive theory has implications for the design of instructional environments around simulations. Most of these implications are quite general, but they can also be related to the three types of learning goals. For conceptual knowledge, the sequence and choice of models and problems is important, as is providing the learner with explanations and minimizing error. For operational knowledge, cognitive theory recommends that learning take place in a problem-solving context, with explicit tracing of the learner's behaviour, immediate feedback, and minimization of working-memory load. For knowledge acquisition goals, it is recommended that the tutor take the role of a model and coach, and that learning take place together with a companion. A second source of inspiration for designing instructional environments can be found in instructional design theories. Reviewing these shows that interacting with a simulation can be part of a more comprehensive instructional strategy, in which, for example, prerequisite knowledge is also taught. Moreover, information present in a simulation can also be represented in a more structured or static way, and learners can be provoked to perform specific learning processes and activities by tutor-controlled variations in the simulation and by tutor-initiated prodding techniques. Finally, instructional design theories show that complex models and procedures can be taught by starting with their central, simple elements and subsequently presenting more complex versions. Most of the recent simulation-based intelligent tutoring systems involve troubleshooting of complex technical systems. Learners are supposed to acquire knowledge of particular system principles, of troubleshooting procedures, or of both. Commonly encountered instructional features include (a) the sequencing of increasingly complex problems to be solved, (b) the availability of a range of help information on request, (c) the presence of an expert troubleshooting module which can step in to provide criticism of learner performance, hints on the nature of the problem, or suggestions on how to proceed, (d) the option of having the expert module demonstrate optimal performance afterwards, and (e) the use of different ways of depicting the simulated system. A selection of findings is summarized under the four themes we consider characteristic of learning with computer simulations (see de Jong, this volume).
  • Van der Zande, P., Jesse, A., & Cutler, A. (2013). Lexically guided retuning of visual phonetic categories. Journal of the Acoustical Society of America, 134, 562-571. doi:10.1121/1.4807814.

    Abstract

    Listeners retune the boundaries between phonetic categories to adjust to individual speakers' productions. Lexical information, for example, indicates what an unusual sound is supposed to be, and boundary retuning then enables the speaker's sound to be included in the appropriate auditory phonetic category. In this study, we investigated whether lexical knowledge, which is known to guide the retuning of auditory phonetic categories, can also retune visual phonetic categories. In Experiment 1, exposure to a visual idiosyncrasy in ambiguous audiovisually presented target words in a lexical decision task indeed resulted in retuning of the visual category boundary, based on the disambiguating lexical context. In Experiment 2, we tested whether lexical information retunes visual categories directly, or indirectly through generalization from retuned auditory phonetic categories. Here, participants were exposed to auditory-only versions of the same ambiguous target words as in Experiment 1. Auditory phonetic categories were retuned by lexical knowledge, but no shifts were observed for the visual phonetic categories. Lexical knowledge can therefore guide retuning of visual phonetic categories, but lexically guided retuning of auditory phonetic categories does not generalize to visual categories. Rather, listeners adjust auditory and visual phonetic categories to talker idiosyncrasies separately.
  • Van Leeuwen, E. J. C., Cronin, K. A., Haun, D. B. M., Mundry, R., & Bodamer, M. D. (2012). Neighbouring chimpanzee communities show different preferences in social grooming behaviour. Proceedings of the Royal Society B: Biological Sciences, 279, 4362-4367. doi:10.1098/rspb.2012.1543.

    Abstract

    Grooming handclasp (GHC) behaviour was originally advocated as the first evidence of social culture in chimpanzees, owing to the finding that some populations engaged in the behaviour and others did not. To date, however, the validity of this claim and the extent to which this social behaviour varies between groups are unclear. Here, we measured (i) variation, (ii) durability and (iii) expansion of the GHC behaviour in four chimpanzee communities that do not systematically differ in their genetic backgrounds and live in similar ecological environments. Ninety chimpanzees were studied for a total of 1029 h; 1394 GHC bouts were observed between 2010 and 2012. Critically, GHC style (defined by points of bodily contact) could be systematically linked to the chimpanzee's group identity, showed temporal consistency both within- and between-groups, and could not be accounted for by the arm-length differential between partners. GHC has been part of the behavioural repertoire of the chimpanzees under study for more than 9 years (surpassing the durability criterion) and has spread across generations (surpassing the expansion criterion). These results strongly indicate that chimpanzees' social behaviour is not only motivated by innate predispositions and individual inclinations, but may also be partly cultural in nature.
  • Van Leeuwen, T. M., Hagoort, P., & Händel, B. F. (2013). Real color captures attention and overrides spatial cues in grapheme-color synesthetes but not in controls. Neuropsychologia, 51(10), 1802-1813. doi:10.1016/j.neuropsychologia.2013.06.024.

    Abstract

    Grapheme-color synesthetes perceive color when reading letters or digits. We investigated oscillatory brain signals of synesthetes vs. controls using magnetoencephalography. Brain oscillations specifically in the alpha band (∼10 Hz) have two interesting features: alpha has been linked to inhibitory processes and can act as a marker for attention. The possible role of reduced inhibition as an underlying cause of synesthesia, as well as the precise role of attention in synesthesia, is widely discussed. To assess alpha power effects due to synesthesia, synesthetes as well as matched controls viewed synesthesia-inducing graphemes, colored control graphemes, and non-colored control graphemes while brain activity was recorded. Subjects had to report a color change at the end of each trial, which allowed us to assess the strength of synesthesia in each synesthete. Since color (synesthetic or real) might allocate attention, we also included an attentional cue in our paradigm which could direct covert attention. In controls the attentional cue always caused a lateralization of alpha power, with a contralateral decrease and ipsilateral alpha increase over occipital sensors. In synesthetes, however, the influence of the cue was overruled by color: independent of the attentional cue, alpha power decreased contralateral to the color (synesthetic or real). This indicates that in synesthetes color guides attention. This was confirmed by reaction time effects due to color, i.e. faster RTs for the color side independent of the cue. Finally, the stronger the observed color-dependent alpha lateralization, the stronger was the manifestation of synesthesia as measured by congruency effects of synesthetic colors on RTs. Behavioral and imaging results indicate that color induces a location-specific, automatic shift of attention towards color in synesthetes but not in controls. We hypothesize that this mechanism can facilitate coupling of grapheme and color during the development of synesthesia.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (1999). Semantic integration in sentences and discourse: Evidence from the N400. Journal of Cognitive Neuroscience, 11(6), 657-671. doi:10.1162/089892999563724.

    Abstract

    In two ERP experiments we investigated how and when the language comprehension system relates an incoming word to semantic representations of an unfolding local sentence and a wider discourse. In experiment 1, subjects were presented with short stories. The last sentence of these stories occasionally contained a critical word that, although acceptable in the local sentence context, was semantically anomalous with respect to the wider discourse (e.g., "Jane told the brother that he was exceptionally slow" in a discourse context where he had in fact been very quick). Relative to coherent control words (e.g., "quick"), these discourse-dependent semantic anomalies elicited a large N400 effect that began at about 200-250 ms after word onset. In experiment 2, the same sentences were presented without their original story context. Although the words that had previously been anomalous in discourse still elicited a slightly larger average N400 than the coherent words, the resulting N400 effect was much reduced, showing that the large effect observed in stories was related to the wider discourse. In the same experiment, single sentences that contained a clear local semantic anomaly elicited a standard sentence-dependent N400 effect (e.g., Kutas & Hillyard, 1980). The N400 effects elicited in discourse and in single sentences had the same time course, overall morphology, and scalp distribution. We argue that these findings are most compatible with models of language processing in which there is no fundamental distinction between the integration of a word in its local (sentence-level) and its global (discourse-level) semantic context.
  • Van Alphen, P. M., & Van Berkum, J. J. A. (2012). Semantic involvement of initial and final lexical embeddings during sense-making: The advantage of starting late. Frontiers in Psychology, 3, 190. doi:10.3389/fpsyg.2012.00190.

    Abstract

    During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like 'day' in 'daisy', or 'dean' in 'sardine'. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding ('day' in 'daisy') did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding ('dean' in 'sardine') did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
  • Van Ackeren, M. J., Casasanto, D., Bekkering, H., Hagoort, P., & Rueschemeyer, S.-A. (2012). Pragmatics in action: Indirect requests engage theory of mind areas and the cortical motor network. Journal of Cognitive Neuroscience, 24, 2237-2247. doi:10.1162/jocn_a_00274.

    Abstract

    Research from the past decade has shown that understanding the meaning of words and utterances (i.e., abstracted symbols) engages the same systems we use to perceive and interact with the physical world, in a content-specific manner. For example, understanding the word “grasp” elicits activation in the cortical motor network, that is, part of the neural substrate involved in planning and executing a grasping action. In the embodied literature, cortical motor activation during language comprehension is thought to reflect motor simulation underlying conceptual knowledge [note that outside the embodied framework, other explanations for the link between action and language are offered, e.g., Mahon, B. Z., & Caramazza, A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology, 102, 59–70, 2008; Hagoort, P. On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9, 416–423, 2005]. Previous research has supported the view that the coupling between language and action is flexible, and that reading an action-related word form is not sufficient for cortical motor activation [Van Dam, W. O., van Dijk, M., Bekkering, H., & Rueschemeyer, S.-A. Flexibility in embodied lexical–semantic representations. Human Brain Mapping, doi: 10.1002/hbm.21365, 2011]. The current study goes one step further by addressing the necessity of action-related word forms for motor activation during language comprehension. Subjects listened to indirect requests (IRs) for action during an fMRI session. IRs for action are speech acts in which access to an action concept is required, although it is not explicitly encoded in the language. For example, the utterance “It is hot here!” in a room with a window is likely to be interpreted as a request to open the window. However, the same utterance in a desert will be interpreted as a statement. The results indicate (1) that comprehension of IR sentences activates cortical motor areas reliably more than comprehension of sentences devoid of any implicit motor information. This is true despite the fact that IR sentences contain no lexical reference to action. (2) Comprehension of IR sentences also reliably activates substantial portions of the theory of mind network, known to be involved in making inferences about the mental states of others. The implications of these findings for embodied theories of language are discussed.
  • Van de Ven, M., Ernestus, M., & Schreuder, R. (2012). Predicting acoustically reduced words in spontaneous speech: The role of semantic/syntactic and acoustic cues in context. Laboratory Phonology, 3, 455-481. doi:10.1515/lp-2012-0020.

    Abstract

    In spontaneous speech, words may be realised shorter than in formal speech (e.g., English yesterday may be pronounced like [jɛʃeɩ]). Previous research has shown that context is required to understand highly reduced pronunciation variants. We investigated the extent to which listeners can predict low predictability reduced words on the basis of the semantic/syntactic and acoustic cues in their context. In four experiments, participants were presented with either the preceding context or the preceding and following context of reduced words, and either heard these fragments of conversational speech, or read their orthographic transcriptions. Participants were asked to predict the missing reduced word on the basis of the context alone, choosing from four plausible options. Participants made use of acoustic cues in the context, although casual speech typically has a high speech rate, and acoustic cues are much more unclear than in careful speech. Moreover, they relied on semantic/syntactic cues. Whenever there was a conflict between acoustic and semantic/syntactic contextual cues, measured as the word's probability given the surrounding words, listeners relied more heavily on acoustic cues. Further, context appeared generally insufficient to predict the reduced words, underpinning the significance of the acoustic characteristics of the reduced words themselves.
  • Van der Veer, G. C., Bagnara, S., & Kempen, G. (1991). Preface. Acta Psychologica, 78, ix. doi:10.1016/0001-6918(91)90002-H.
  • Van Berkum, J. J. A., Brown, C. M., & Hagoort, P. (1999). When does gender constrain parsing? Evidence from ERPs. Journal of Psycholinguistic Research, 28(5), 555-566. doi:10.1023/A:1023224628266.

    Abstract

    We review the implications of recent ERP evidence for when and how grammatical gender agreement constrains sentence parsing. In some theories of parsing, gender is assumed to immediately and categorically block gender-incongruent phrase structure alternatives from being pursued. In other theories, the parser initially ignores gender altogether. The ERP evidence we discuss suggests an intermediate position, in which grammatical gender does not immediately block gender-incongruent phrase structures from being considered, but is used to dispose of them shortly thereafter.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1999). The time course of grammatical and phonological processing during speaking: evidence from event-related brain potentials. Journal of Psycholinguistic Research, 28(6), 649-676. doi:10.1023/A:1023221028150.

    Abstract

    Motor-related brain potentials were used to examine the time course of grammatical and phonological processes during noun phrase production in Dutch. In the experiments, participants named colored pictures using a no-determiner noun phrase. On half of the trials a syntactic-phonological classification task had to be performed before naming. Depending on the outcome of the classifications, a left or a right push-button response was given (go trials), or no push-button response was given (no-go trials). Lateralized readiness potentials (LRPs) were derived to test whether syntactic and phonological information affected the motor system at separate moments in time. The results showed that when syntactic information determined the response-hand decision, an LRP developed on no-go trials. However, no such effect was observed when phonological information determined response hand. On the basis of the data, it can be estimated that an additional period of at least 40 ms is needed to retrieve a word's initial phoneme once its lemma has been retrieved. These results provide evidence for the view that during speaking, grammatical processing precedes phonological processing in time.
  • Van Berkum, J. J. A. (2012). Zonder gevoel geen taal [Without feeling, no language]. Neerlandistiek.nl. Wetenschappelijk tijdschrift voor de Nederlandse taal- en letterkunde, 12(01).

    Abstract

    Illustrated republication of the inaugural lecture delivered on 30 September 2011 upon acceptance of the chair in Discourse, Cognition and Communication at Utrecht University. Unlike the original lecture text, this republication also contains various illustrations and links. In addition, two accompanying articles by colleagues respond to it (see http://www.neerlandistiek.nl/12.01a/ and http://www.neerlandistiek.nl/12.01b/).
  • Vaughn, C., & Brouwer, S. (2013). Perceptual integration of indexical information in bilingual speech. Proceedings of Meetings on Acoustics, 19: 060208. doi:10.1121/1.4800264.

    Abstract

    The present research examines how different types of indexical information, namely talker information and the language being spoken, are perceptually integrated in bilingual speech. Using a speeded classification paradigm (Garner, 1974), variability in characteristics of the talker (gender in Experiment 1 and specific talker in Experiment 2) and in the language being spoken (Mandarin vs. English) was manipulated. Listeners from two different language backgrounds, English monolinguals and Mandarin-English bilinguals, were asked to classify short, meaningful sentences obtained from different Mandarin-English bilingual talkers on these indexical dimensions. Results for the gender-language classification (Exp. 1) showed a significant, symmetrical interference effect for both listener groups, indicating that gender information and language are processed in an integral manner. For talker-language classification (Exp. 2), language interfered more with talker than vice versa for the English monolinguals, but symmetrical interference was found for the Mandarin-English bilinguals. These results suggest both that talker-specificity is not fully segregated from language-specificity, and that bilinguals exhibit more balanced classification along various indexical dimensions of speech. Currently, follow-up studies investigate this talker-language dependency for bilingual listeners who do not speak Mandarin in order to disentangle the role of bilingualism versus language familiarity.
  • Verdonschot, R. G., La Heij, W., Tamaoka, K., Kiyama, S., You, W.-P., & Schiller, N. O. (2013). The multiple pronunciations of Japanese kanji: A masked priming investigation. Quarterly Journal of Experimental Psychology, 66(10), 2023-2038. doi:10.1080/17470218.2013.773050.

    Abstract

    English words with an inconsistent grapheme-to-phoneme conversion or with more than one pronunciation (homographic heterophones; e.g., lead: /lɛd/, /lid/) are read aloud more slowly than matched controls, presumably due to competition processes. In Japanese kanji, the majority of characters have multiple readings for the same orthographic unit: the native Japanese reading (KUN) and the derived Chinese reading (ON). This raises the question of whether reading these characters also incurs processing costs. Studies examining this issue have provided mixed evidence. The current study addressed the question of whether processing these kanji characters leads to the simultaneous activation of their KUN and ON readings. This was measured directly in a masked priming paradigm. In addition, we assessed whether the relative frequencies of the KUN and ON pronunciations (the dominance ratio, measured in compound words) affect the amount of priming. The results of two experiments showed that: (a) a single kanji, presented as a masked prime, facilitates the reading of (the katakana transcriptions of) its KUN and ON pronunciations; however, (b) this was most consistently found when the dominance ratio was around 50% (no strong dominance towards either pronunciation) and when the dominance was towards the ON reading (high-ON group). When the dominance was towards the KUN reading (high-KUN group), no significant priming for the ON reading was observed. Implications for models of kanji processing are discussed.
  • Verdonschot, R. G., Nakayama, M., Zhang, Q., Tamaoka, K., & Schiller, N. O. (2013). The proximate phonological unit of Chinese-English bilinguals: Proficiency matters. PLoS One, 8(4): e61454. doi:10.1371/journal.pone.0061454.

    Abstract

    An essential step to create phonology according to the language production model by Levelt, Roelofs and Meyer is to assemble phonemes into a metrical frame. However, recently, it has been proposed that different languages may rely on different grain sizes of phonological units to construct phonology. For instance, it has been proposed that, instead of phonemes, Mandarin Chinese uses syllables and Japanese uses moras to fill the metrical frame. In this study, we used a masked priming-naming task to investigate how bilinguals assemble their phonology for each language when the two languages differ in grain size. Highly proficient Mandarin Chinese-English bilinguals showed a significant masked onset priming effect in English (L2), and a significant masked syllabic priming effect in Mandarin Chinese (L1). These results suggest that their proximate unit is phonemic in L2 (English), and that bilinguals may use different phonological units depending on the language that is being processed. Additionally, under some conditions, a significant sub-syllabic priming effect was observed even in Mandarin Chinese, which indicates that L2 phonology exerts influences on L1 target processing as a consequence of having a good command of English.

  • Verdonschot, R. G., Middelburg, R., Lensink, S. E., & Schiller, N. O. (2012). Morphological priming survives a language switch. Cognition, 124(3), 343-349. doi:10.1016/j.cognition.2012.05.019.

    Abstract

    In a long-lag morphological priming experiment, Dutch (L1)-English (L2) bilinguals were asked to name pictures and read aloud words. A design using non-switch blocks, consisting solely of Dutch stimuli, and switch blocks, consisting of Dutch primes and targets with intervening English trials, was administered. Target picture naming was facilitated by morphologically related primes in both non-switch and switch blocks, with equal magnitude. These results contrast with some assumptions of sustained reactive-inhibition models. However, models in which bilinguals do not have to reactively suppress all activation of the non-target language can account for these data.
  • Verga, L., & Kotz, S. A. (2013). How relevant is social interaction in second language learning? Frontiers in Human Neuroscience, 7: 550. doi:10.3389/fnhum.2013.00550.

    Abstract

    Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.
  • Verhoeven, V. J. M., Hysi, P. G., Wojciechowski, R., Fan, Q., Guggenheim, J. A., Höhn, R., MacGregor, S., Hewitt, A. W., Nag, A., Cheng, C.-Y., Yonova-Doing, E., Zhou, X., Ikram, M. K., Buitendijk, G. H. S., McMahon, G., Kemp, J. P., St Pourcain, B., Simpson, C. L., Mäkelä, K.-M., Lehtimäki, T., Kähönen, M., Paterson, A. D., Hosseini, S. M., Wong, H. S., Xu, L., Jonas, J. B., Pärssinen, O., Wedenoja, J., Yip, S. P., Ho, D. W. H., Pang, C. P., Chen, L. J., Burdon, K. P., Craig, J. E., Klein, B. E. K., Klein, R., Haller, T., Metspalu, A., Khor, C.-C., Tai, E.-S., Aung, T., Vithana, E., Tay, W.-T., Barathi, V. A., Chen, P., Li, R., Liao, J., Zheng, Y., Ong, R. T., Döring, A., Evans, D. M., Timpson, N. J., Verkerk, A. J. M. H., Meitinger, T., Raitakari, O., Hawthorne, F., Spector, T. D., Karssen, L. C., Pirastu, M., Murgia, F., Ang, W., Mishra, A., Montgomery, G. W., Pennell, C. E., Cumberland, P. M., Cotlarciuc, I., Mitchell, P., Wang, J. J., Schache, M., Janmahasatian, S., Janmahasathian, S., Igo, R. P., Lass, J. H., Chew, E., Iyengar, S. K., Gorgels, T. G. M. F., Rudan, I., Hayward, C., Wright, A. F., Polasek, O., Vatavuk, Z., Wilson, J. F., Fleck, B., Zeller, T., Mirshahi, A., Müller, C., Uitterlinden, A. G., Rivadeneira, F., Vingerling, J. R., Hofman, A., Oostra, B. A., Amin, N., Bergen, A. A. B., Teo, Y.-Y., Rahi, J. S., Vitart, V., Williams, C., Baird, P. N., Wong, T.-Y., Oexle, K., Pfeiffer, N., Mackey, D. A., Young, T. L., van Duijn, C. M., Saw, S.-M., Bailey-Wilson, J. E., Stambolian, D., Klaver, C. C., Hammond, C. J., Consortium for Refractive Error and Myopia (CREAM), The Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) Research Group, Wellcome Trust Case Control Consortium 2 (WTCCC2), & The Fuchs' Genetics Multi-Center Study Group (2013). Genome-wide meta-analyses of multiancestry cohorts identify multiple new susceptibility loci for refractive error and myopia. Nature Genetics, 45(3), 314-318. doi:10.1038/ng.2554.

    Abstract

    Refractive error is the most common eye disorder worldwide and is a prominent cause of blindness. Myopia affects over 30% of Western populations and up to 80% of Asians. The CREAM consortium conducted genome-wide meta-analyses, including 37,382 individuals from 27 studies of European ancestry and 8,376 from 5 Asian cohorts. We identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. Combined analysis identified 8 additional associated loci. The new loci include candidate genes with functions in neurotransmission (GRIA4), ion transport (KCNQ5), retinoic acid metabolism (RDH5), extracellular matrix remodeling (LAMA2 and BMP2) and eye development (SIX6 and PRSS56). We also confirmed previously reported associations with GJD2 and RASGRF1. Risk score analysis using associated SNPs showed a tenfold increased risk of myopia for individuals carrying the highest genetic load. Our results, based on a large meta-analysis across independent multiancestry studies, considerably advance understanding of the mechanisms involved in refractive error and myopia.
  • Verkerk, A., & Frostad, B. H. (2013). The encoding of manner predications and resultatives in Oceanic: A typological and historical overview. Oceanic Linguistics, 52, 1-35. doi:10.1353/ol.2013.0010.

    Abstract

    This paper is concerned with the encoding of resultatives and manner predications in Oceanic languages. Our point of departure is a typological overview of the encoding strategies and their geographical distribution, and we investigate their historical traits by the use of phylogenetic comparative methods. A full theory of the historical pathways is not always accessible for all the attested encoding strategies, given the data available for this study. However, tentative theories about the development and origin of the attested strategies are given. One of the most frequent strategy types used to encode both manner predications and resultatives has been given special emphasis. This is a construction in which a reflex form of the Proto-Oceanic causative *pa-/*paka- modifies the second verb in serial verb constructions.

    Additional information

    52.1.verkerk_supp01.pdf
  • Verkerk, A. (2013). Scramble, scurry and dash: The correlation between motion event encoding and manner verb lexicon size in Indo-European. Language Dynamics and Change, 3, 169-217. doi:10.1163/22105832-13030202.

    Abstract

    In recent decades, much has been discovered about the different ways in which people can talk about motion (Talmy, 1985, 1991; Slobin, 1996, 1997, 2004). Slobin (1997) has suggested that satellite-framed languages typically have a larger and more diverse lexicon of manner of motion verbs (such as run, fly, and scramble) when compared to verb-framed languages. Slobin (2004) has claimed that larger manner of motion verb lexicons originate over time because codability factors increase the accessibility of manner in satellite-framed languages. In this paper I investigate the dependency between the use of the satellite-framed encoding construction and the size of the manner verb lexicon. The data used come from 20 Indo-European languages. The methodology applied is a range of phylogenetic comparative methods adopted from biology, which allow for an investigation of this dependency while taking into account the shared history between these 20 languages. The results provide evidence that Slobin’s hypothesis was correct, and indeed there seems to be a relationship between the use of the satellite-framed construction and the size of the manner verb lexicon.
  • von Stutterheim, C., Andermann, M., Carroll, M., Flecken, M., & Schmiedtova, B. (2012). How grammaticized concepts shape event conceptualization in language production: Insights from linguistic analysis, eye tracking data, and memory performance. Linguistics, 50(4), 833-867. doi:10.1515/ling-2012-0026.

    Abstract

    The role of grammatical systems in profiling particular conceptual categories is used as a key in exploring questions concerning language specificity during the conceptualization phase in language production. This study focuses on the extent to which crosslinguistic differences in the concepts profiled by grammatical means in the domain of temporality (grammatical aspect) affect event conceptualization and distribution of attention when talking about motion events. The analyses, which cover native speakers of Standard Arabic, Czech, Dutch, English, German, Russian and Spanish, not only involve linguistic evidence, but also data from an eye tracking experiment and a memory test. The findings show that direction of attention to particular parts of motion events varies to some extent with the existence of grammaticized means to express imperfective/progressive aspect. Speakers of languages that do not have grammaticized aspect of this type are more likely to take a holistic view when talking about motion events and attend to as well as refer to endpoints of motion events, in contrast to speakers of aspect languages.
  • von Stutterheim, C., Flecken, M., & Carroll, M. (2013). Introduction: Conceptualizing in a second language. International Review of Applied Linguistics in Language Teaching, 51(2), 77-85. doi:10.1515/iral-2013-0004.
  • von Stutterheim, C., & Flecken, M. (Eds.). (2013). Principles of information organization in L2 discourse [Special Issue]. International Review of Applied Linguistics in Language Teaching (IRAL), 51(2).
  • De Vos, C., & Palfreyman, N. (2012). [Review of the book Deaf around the World: The impact of language / ed. by Mathur & Napoli]. Journal of Linguistics, 48, 731-735.

    Abstract

    First paragraph. Since its advent half a century ago, the field of sign language linguistics has had close ties to education and the empowerment of deaf communities, a union that is fittingly celebrated by Deaf around the world: The impact of language. With this fruitful relationship in mind, sign language researchers and deaf educators gathered in Philadelphia in 2008, and in the volume under review, Gaurav Mathur & Donna Jo Napoli (henceforth M&N) present a selection of papers from this conference, organised in two parts: ‘Sign languages: Creation, context, form’, and ‘Social issues/civil rights’. Each of the chapters is accompanied by a response chapter on the same or a related topic. The first part of the volume focuses on the linguistics of sign languages and includes papers on the impact of language modality on morphosyntax, second language acquisition, and grammaticalisation, highlighting the fine balance that sign linguists need to strike when conducting methodologically sound research. The second part of the book includes accounts by deaf activists from countries including China, India, Japan, Kenya, South Africa and Sweden who are considered prominent figures in areas such as deaf education, politics, culture and international development.
  • De Vos, C. (2013). Sign-Spatiality in Kata Kolok: How a village sign language of Bali inscribes its signing space [Dissertation abstract]. Sign Language & Linguistics, 16(2), 277-284. doi:10.1075/sll.16.2.08vos.
  • De Vriend, F., Broeder, D., Depoorter, G., van Eerten, L., & Van Uytvanck, D. (2013). Creating & Testing CLARIN Metadata Components. Language Resources and Evaluation, 47(4), 1315-1326. doi:10.1007/s10579-013-9231-6.

    Abstract

    The CLARIN Metadata Infrastructure (CMDI) that is being developed in Common Language Resources and Technology Infrastructure (CLARIN) is a computer-supported framework that combines a flexible component approach with the explicit declaration of semantics. The goal of the Dutch CLARIN project “Creating & Testing CLARIN Metadata Components” was to create metadata components and profiles for a wide variety of existing resources housed at two data centres according to the CMDI specifications. In doing so the principles of the framework were tested. The results of the project are of benefit to other CLARIN-projects that are expected to adhere to the CMDI framework and its accompanying tools.
  • De Vries, M. H., Petersson, K. M., Geukes, S., Zwitserlood, P., & Christiansen, M. H. (2012). Processing multiple non-adjacent dependencies: Evidence from sequence learning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367, 2065-2076. doi:10.1098/rstb.2011.0414.

    Abstract

    Processing non-adjacent dependencies is considered to be one of the hallmarks of human language. Assuming that sequence-learning tasks provide a useful way to tap natural-language-processing mechanisms, we cross-modally combined serial reaction time and artificial-grammar learning paradigms to investigate the processing of multiple nested (A1A2A3B3B2B1) and crossed dependencies (A1A2A3B1B2B3), containing either three or two dependencies. Both reaction times and prediction errors highlighted problems with processing the middle dependency in nested structures (A1A2A3B3_B1), reminiscent of the ‘missing-verb effect’ observed in English and French, but not with crossed structures (A1A2A3B1_B3). Prior linguistic experience did not play a major role: native speakers of German and Dutch—which permit nested and crossed dependencies, respectively—showed a similar pattern of results for sequences with three dependencies. As for sequences with two dependencies, reaction times and prediction errors were similar for both nested and crossed dependencies. The results suggest that constraints on the processing of multiple non-adjacent dependencies are determined by the specific ordering of the non-adjacent dependencies (i.e. nested or crossed), as well as the number of non-adjacent dependencies to be resolved (i.e. two or three). Furthermore, these constraints may not be specific to language but instead derive from limitations on structured sequence learning.
  • Wagensveld, B., Segers, E., Van Alphen, P. M., Hagoort, P., & Verhoeven, L. (2012). A neurocognitive perspective on rhyme awareness: The N450 rhyme effect. Brain Research, 1483, 63-70. doi:10.1016/j.brainres.2012.09.018.

    Abstract

    Rhyme processing is reflected in the electrophysiological signals of the brain as a negative deflection for non-rhyming as compared to rhyming stimuli around 450 ms after stimulus onset. Studies have shown that this N450 component is not solely sensitive to rhyme but also responds to other types of phonological overlap. In the present study, we examined whether the N450 component can be used to gain insight into the global similarity effect, indicating that rhyme judgment skills decrease when participants are presented with word pairs that share a phonological overlap but do not rhyme (e.g., bell–ball). We presented 20 adults with auditory rhyming, globally similar overlapping and unrelated word pairs. In addition to measuring behavioral responses by means of a yes/no button press, we also took EEG measures. The behavioral data showed a clear global similarity effect; participants judged overlapping pairs more slowly than unrelated pairs. However, the neural outcomes did not provide evidence that the N450 effect responds differentially to globally similar and unrelated word pairs, suggesting that globally similar and dissimilar non-rhyming pairs are processed in a similar fashion at the stage of early lexical access.
  • Wagensveld, B., Segers, E., Van Alphen, P. M., & Verhoeven, L. (2013). The role of lexical representations and phonological overlap in rhyme judgments of beginning, intermediate and advanced readers. Learning and Individual Differences, 23, 64-71. doi:10.1016/j.lindif.2012.09.007.

    Abstract

    Studies have shown that prereaders find globally similar non-rhyming pairs (i.e., bell–ball) difficult to judge. Although this effect has been explained as a result of ill-defined lexical representations, others have suggested that it is part of an innate tendency to respond to phonological overlap. In the present study we examined this effect over time. Beginning, intermediate and advanced readers were presented with a rhyme judgment task containing rhyming, phonologically similar, and unrelated non-rhyming pairs. To examine the role of lexical representations, participants were presented with both words and pseudowords. Outcomes showed that pseudoword processing was difficult for children but not for adults. The global similarity effect was present in both children and adults. The findings imply that holistic representations cannot explain the incapacity to ignore similarity relations during rhyming. Instead, the data provide more evidence for the idea that global similarity processing is part of a more fundamental innate phonological processing capacity.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., & Verhoeven, L. (2012). The nature of rhyme processing in preliterate children. British Journal of Educational Psychology, 82, 672-689. doi:10.1111/j.2044-8279.2011.02055.x.

    Abstract

    Background. Rhyme awareness is one of the earliest forms of phonological awareness to develop and is assessed in many developmental studies by means of a simple rhyme task. The influence of more demanding experimental paradigms on rhyme judgment performance is often neglected. Addressing this issue may also shed light on whether rhyme processing is more global or analytical in nature. Aims. The aim of the present study was to examine whether lexical status and global similarity relations influenced rhyme judgments in kindergarten children and if so, if there is an interaction between these two factors. Sample. Participants were 41 monolingual Dutch-speaking preliterate kindergartners (average age 6.0 years) who had not yet received any formal reading education. Method. To examine the effects of lexical status and phonological similarity processing, the kindergartners were asked to make rhyme judgements on (pseudo) word targets that rhymed, phonologically overlapped or were unrelated to (pseudo) word primes. Results. Both a lexicality effect (pseudo-words were more difficult than words) and a global similarity effect (globally similar non-rhyming items were more difficult to reject than unrelated items) were observed. In addition, whereas in words the global similarity effect was only present in accuracy outcomes, in pseudo-words it was also observed in the response latencies. Furthermore, a large global similarity effect in pseudo-words correlated with a low score on short-term memory skills and grapheme knowledge. Conclusions. Increasing task demands led to a more detailed assessment of rhyme processing skills. Current assessment paradigms should therefore be extended with more demanding conditions. In light of the views on rhyme processing, we propose that a combination of global and analytical strategies is used to make a correct rhyme judgment.
  • Wagensveld, B., Van Alphen, P. M., Segers, E., Hagoort, P., & Verhoeven, L. (2013). The neural correlates of rhyme awareness in preliterate and literate children. Clinical Neurophysiology, 124, 1336-1345. doi:10.1016/j.clinph.2013.01.022.

    Abstract

    Objective Most rhyme awareness assessments do not encompass measures of the global similarity effect (i.e., children who are able to perform simple rhyme judgments get confused when presented with globally similar non-rhyming pairs). The present study examines the neural nature of this effect by studying the N450 rhyme effect. Methods Behavioral and electrophysiological responses of Dutch pre-literate kindergartners and literate second graders were recorded while they made rhyme judgments of word pairs in three conditions; phonologically rhyming (e.g., wijn-pijn), overlapping non-rhyming (e.g., pen-pijn) and unrelated non-rhyming pairs (e.g., boom-pijn). Results Behaviorally, both groups had difficulty judging overlapping but not rhyming and unrelated pairs. The neural data of second graders showed overlapping pairs were processed in a similar fashion as unrelated pairs; both showed a more negative deflection of the N450 component than rhyming items. Kindergartners did not show a typical N450 rhyme effect. However, some other interesting ERP differences were observed, indicating preliterates are sensitive to rhyme at a certain level. Significance Rhyme judgments of globally similar items rely on the same process as rhyme judgments of rhyming and unrelated items. Therefore, incorporating a globally similar condition in rhyme assessments may lead to a more in-depth measure of early phonological awareness skills.
  • Wagner, A. (2013). Cross-language similarities and differences in the uptake of place information. Journal of the Acoustical Society of America, 133, 4256-4267. doi:10.1121/1.4802904.

    Abstract

    Cross-language differences in the use of coarticulatory cues for the identification of fricatives have been demonstrated in a phoneme detection task: Listeners with perceptually similar fricative pairs in their native phoneme inventories (English, Polish, Spanish) relied more on cues from vowels than listeners with perceptually more distinct fricative contrasts (Dutch and German). The present gating study further investigated these cross-language differences and addressed three questions. (1) Are there cross-language differences in informativeness of parts of the speech signal regarding place of articulation for fricative identification? (2) Are such cross-language differences fricative-specific, or do they extend to the perception of place of articulation for plosives? (3) Is such language-specific uptake of information based on cues preceding or following the consonantal constriction? Dutch, Italian, Polish, and Spanish listeners identified fricatives and plosives in gated CV and VC syllables. The results showed cross-language differences in the informativeness of coarticulatory cues for fricative identification: Spanish and Polish listeners extracted place of articulation information from shorter portions of VC syllables. No language-specific differences were found for plosives, suggesting that greater reliance on coarticulatory cues did not generalize to other phoneme types. The language-specific differences for fricatives were based on coarticulatory cues into the consonant.
  • Walker, R. M., Hill, A. E., Newman, A. C., Hamilton, G., Torrance, H. S., Anderson, S. M., Ogawa, F., Derizioti, P., Nicod, J., Vernes, S. C., Fisher, S. E., Thomson, P. A., Porteous, D. J., & Evans, K. L. (2012). The DISC1 promoter: Characterization and regulation by FOXP2. Human Molecular Genetics, 21, 2862-2872. doi:10.1093/hmg/dds111.

    Abstract

    Disrupted in schizophrenia 1 (DISC1) is a leading candidate susceptibility gene for schizophrenia, bipolar disorder, and recurrent major depression, which has been implicated in other psychiatric illnesses of neurodevelopmental origin, including autism. DISC1 was initially identified at the breakpoint of a balanced chromosomal translocation, t(1;11)(q42.1;14.3), in a family with a high incidence of psychiatric illness. Carriers of the translocation show a 50% reduction in DISC1 protein levels, suggesting altered DISC1 expression as a pathogenic mechanism in psychiatric illness. Altered DISC1 expression in the post-mortem brains of individuals with psychiatric illness and the frequent implication of non-coding regions of the gene by association analysis further support this assertion. Here, we provide the first characterisation of the DISC1 promoter region. Using dual luciferase assays, we demonstrate that a region -300bp to -177bp relative to the transcription start site (TSS) contributes positively to DISC1 promoter activity, whilst a region -982bp to -301bp relative to the TSS confers a repressive effect. We further demonstrate inhibition of DISC1 promoter activity and protein expression by FOXP2, a transcription factor implicated in speech and language function. This inhibition is diminished by two distinct FOXP2 point mutations, R553H and R328X, which were previously found in families affected by developmental verbal dyspraxia (DVD). Our work identifies an intriguing mechanistic link between neurodevelopmental disorders that have traditionally been viewed as diagnostically distinct but which do share varying degrees of phenotypic overlap.
  • Walters, J., Rujescu, D., Franke, B., Giegling, I., Vasquez, A., Hargreaves, A., Russo, G., Morris, D., Hoogman, M., Da Costa, A., Moskvina, V., Fernandez, G., Gill, M., Corvin, A., O'Donovan, M., Donohoe, G., & Owen, M. (2013). The role of the major histocompatibility complex region in cognition and brain structure: A schizophrenia GWAS follow-up. American Journal of Psychiatry, 170, 877-885. doi:10.1176/appi.ajp.2013.12020226.

    Abstract

    Objective The authors investigated the effects of recently identified genome-wide significant schizophrenia genetic risk variants on cognition and brain structure. Method A panel of six single-nucleotide polymorphisms (SNPs) was selected to represent genome-wide significant loci from three recent genome-wide association studies (GWAS) for schizophrenia and was tested for association with cognitive measures in 346 patients with schizophrenia and 2,342 healthy comparison subjects. Nominally significant results were evaluated for replication in an independent case-control sample. For SNPs showing evidence of association with cognition, associations with brain structural volumes were investigated in a large independent healthy comparison sample. Results Five of the six SNPs showed no significant association with any cognitive measure. One marker in the major histocompatibility complex (MHC) region, rs6904071, showed independent, replicated evidence of association with delayed episodic memory and was significant when both samples were combined. In the combined sample of up to 3,100 individuals, this SNP was associated with widespread effects across cognitive domains, although these additional associations were no longer significant after adjusting for delayed episodic memory. In the large independent structural imaging sample, the same SNP was also associated with decreased hippocampal volume. Conclusions The authors identified a SNP in the MHC region that was associated with cognitive performance in patients with schizophrenia and healthy comparison subjects. This SNP, rs6904071, showed a replicated association with episodic memory and hippocampal volume. These findings implicate the MHC region in hippocampal structure and functioning, consistent with the role of MHC proteins in synaptic development and function. Follow-up of these results has the potential to provide insights into the pathophysiology of schizophrenia and cognition.

    Additional information

    Hoogman_2013_JourAmePsy.supp.pdf
  • Wang, L., Jensen, O., Van den Brink, D., Weder, N., Schoffelen, J.-M., Magyari, L., Hagoort, P., & Bastiaansen, M. C. M. (2012). Beta oscillations relate to the N400m during language comprehension. Human Brain Mapping, 33, 2898-2912. doi:10.1002/hbm.21410.

    Abstract

    The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time–frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the LIFG, a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2013). ERP evidence on the interaction between information structure and emotional salience of words. Cognitive, Affective and Behavioral Neuroscience, 13, 297-310. doi:10.3758/s13415-012-0146-2.

    Abstract

    Both emotional words and words focused by information structure can capture attention. This study examined the interplay between emotional salience and information structure in modulating attentional resources in the service of integrating emotional words into sentence context. Event-related potentials (ERPs) to affectively negative, neutral, and positive words, which were either focused or nonfocused in question–answer pairs, were evaluated during sentence comprehension. The results revealed an early negative effect (90–200 ms), a P2 effect, as well as an effect in the N400 time window, for both emotional salience and information structure. Moreover, an interaction between emotional salience and information structure occurred within the N400 time window over right posterior electrodes, showing that information structure influences the semantic integration only for neutral words, but not for emotional words. This might reflect the fact that the linguistic salience of emotional words can override the effect of information structure on the integration of words into context. The interaction provides evidence for attention–emotion interactions at a later stage of processing. In addition, the absence of interaction in the early time window suggests that the processing of emotional information is highly automatic and independent of context. The results suggest independent attention capture systems of emotional salience and information structure at the early stage but an interaction between them at a later stage, during the semantic integration of words.
  • Wang, L., Bastiaansen, M. C. M., Yang, Y., & Hagoort, P. (2012). Information structure influences depth of syntactic processing: Event-related potential evidence for the Chomsky illusion. PLoS One, 7(10), e47917. doi:10.1371/journal.pone.0047917.

    Abstract

    Information structure facilitates communication between interlocutors by highlighting relevant information. It has previously been shown that information structure modulates the depth of semantic processing. Here we used event-related potentials to investigate whether information structure can modulate the depth of syntactic processing. In question-answer pairs, subtle (number agreement) or salient (phrase structure) syntactic violations were placed either in focus or out of focus through information structure marking. P600 effects to these violations reflect the depth of syntactic processing. For subtle violations, a P600 effect was observed in the focus condition, but not in the non-focus condition. For salient violations, comparable P600 effects were found in both conditions. These results indicate that information structure can modulate the depth of syntactic processing, but that this effect depends on the salience of the information. When subtle violations are not in focus, they are processed less elaborately. We label this phenomenon the Chomsky illusion.
  • Wang, L., Zhu, Z., & Bastiaansen, M. C. M. (2012). Integration or predictability? A further specification of the functional role of gamma oscillations in language comprehension. Frontiers in Psychology, 3, 187. doi:10.3389/fpsyg.2012.00187.

    Abstract

    Gamma-band neuronal synchronization during sentence-level language comprehension has previously been linked with semantic unification. Here, we attempt to further narrow down the functional significance of gamma during language comprehension, by distinguishing between two aspects of semantic unification: successful integration of word meaning into the sentence context, and prediction of upcoming words. We computed event-related potentials (ERPs) and frequency band-specific electroencephalographic (EEG) power changes while participants read sentences that contained a critical word (CW) that was (1) both semantically congruent and predictable (high cloze, HC), (2) semantically congruent but unpredictable (low cloze, LC), or (3) semantically incongruent (and therefore also unpredictable; semantic violation, SV). The ERP analysis showed the expected parametric N400 modulation (HC < LC < SV). The time-frequency analysis showed qualitatively different results. In the gamma-frequency range, we observed a power increase in response to the CW in the HC condition, but not in the LC and the SV conditions. Additionally, in the theta frequency range we observed a power increase in the SV condition only. Our data provide evidence that gamma power increases are related to the predictability of an upcoming word based on the preceding sentence context, rather than to the integration of the incoming word’s semantics into the preceding context. Further, our theta band data are compatible with the notion that theta band synchronization in sentence comprehension might be related to the detection of an error in the language input.
  • Wang, L., Zhu, Z., Bastiaansen, M. C. M., Hagoort, P., & Yang, Y. (2013). Recognizing the emotional valence of names: An ERP study. Brain and Language, 125, 118-127. doi:10.1016/j.bandl.2013.01.006.

    Abstract

    Unlike common nouns, person names refer to unique entities and generally have a referring function. We used event-related potentials to investigate the time course of identifying the emotional meaning of nouns and names. The emotional valence of names and nouns were manipulated separately. The results show early N1 effects in response to emotional valence only for nouns. This might reflect automatic attention directed towards emotional stimuli. The absence of such an effect for names supports the notion that the emotional meaning carried by names is accessed after word recognition and person identification. In addition, both names with negative valence and emotional nouns elicited late positive effects, which have been associated with evaluation of emotional significance. This positive effect started earlier for nouns than for names, but with similar durations. Our results suggest that distinct neural systems are involved in the retrieval of names’ and nouns’ emotional meaning.
  • Wang, L., & Chu, M. (2013). The role of beat gesture and pitch accent in semantic processing: An ERP study. Neuropsychologia, 51(13), 2847-2855. doi:10.1016/j.neuropsychologia.2013.09.027.

    Abstract

    The present study investigated whether and how beat gesture (small baton-like hand movements used to emphasize information in speech) influences semantic processing as well as its interaction with pitch accent during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role for enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently.
  • Warmelink, L., Vrij, A., Mann, S., Leal, S., & Poletiek, F. H. (2013). The effects of unexpected questions on detecting familiar and unfamiliar lies. Psychiatry, Psychology and Law, 20(1), 29-35. doi:10.1080/13218719.2011.619058.

    Abstract

    Previous research suggests that lie detection can be improved by asking the interviewee unexpected questions. The present experiment investigates the effect of two types of unexpected questions: background questions and detail questions, on detecting lies about topics with which the interviewee is (a) familiar or (b) unfamiliar. In this experiment, 66 participants read interviews in which interviewees answered background or detail questions, either truthfully or deceptively. Those who answered deceptively could be lying about a topic they were familiar with or about a topic they were unfamiliar with. The participants were asked to judge whether the interviewees were lying. The results revealed that background questions distinguished truths from both types of lies, while the detail questions distinguished truths from unfamiliar lies, but not from familiar lies. The implications of these findings are discussed.
  • Weber, A., & Scharenborg, O. (2012). Models of spoken-word recognition. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 387-401. doi:10.1002/wcs.1178.

    Abstract

    All words of the languages we know are stored in the mental lexicon. Psycholinguistic models describe in which format lexical knowledge is stored and how it is accessed when needed for language use. The present article summarizes key findings in spoken-word recognition by humans and describes how models of spoken-word recognition account for them. Although current models of spoken-word recognition differ considerably in the details of implementation, there is general consensus among them on at least three aspects: multiple word candidates are activated in parallel as a word is being heard, activation of word candidates varies with the degree of match between the speech signal and stored lexical representations, and activated candidate words compete for recognition. No consensus has been reached on other aspects such as the flow of information between different processing levels, and the format of stored prelexical and lexical representations.
  • Weber, A., & Crocker, M. W. (2012). On the nature of semantic constraints on lexical access. Journal of Psycholinguistic Research, 41, 195-214. doi:10.1007/s10936-011-9184-0.

    Abstract

    We present two eye-tracking experiments that investigate lexical frequency and semantic context constraints in spoken-word recognition in German. In both experiments, the pivotal words were pairs of nouns overlapping at onset but varying in lexical frequency. In Experiment 1, German listeners showed an expected frequency bias towards high-frequency competitors (e.g., Blume, ‘flower’) when instructed to click on low-frequency targets (e.g., Bluse, ‘blouse’). In Experiment 2, semantically constraining context increased the availability of appropriate low-frequency target words prior to word onset, but did not influence the availability of semantically inappropriate high-frequency competitors at the same time. Immediately after target word onset, however, the activation of high-frequency competitors was reduced in semantically constraining sentences, but still exceeded that of unrelated distractor words significantly. The results suggest that (1) semantic context acts to downgrade activation of inappropriate competitors rather than to exclude them from competition, and (2) semantic context influences spoken-word recognition, over and above anticipation of upcoming referents.
  • Whitehouse, A. J., Bishop, D. V., Ang, Q., Pennell, C. E., & Fisher, S. E. (2012). Corrigendum to CNTNAP2 variants affect early language development in the general population. Genes, Brain and Behavior, 11, 501. doi:10.1111/j.1601-183X.2012.00806.x.

    Abstract

    Corrigendum to CNTNAP2 variants affect early language development in the general population A. J. O. Whitehouse, D. V. M. Bishop, Q. W. Ang, C. E. Pennell and S. E. Fisher Genes Brain Behav (2011) doi: 10.1111/j.1601-183X.2011.00684.x. The authors have detected a typographical error in the Abstract of this paper. The error is in the fifth sentence, which reads: “On the basis of these findings, we performed analyses of four-marker haplotypes of rs2710102–rs759178–rs17236239–rs2538976 and identified significant association (haplotype TTAA, P = 0.049; haplotype GCAG, P = 0.0014).” Rather than “GCAG”, the final haplotype should read “CGAG”. This typographical error was made in the Abstract only and has no bearing on the results or conclusions of the study, which remain unchanged. Reference: Whitehouse, A. J. O., Bishop, D. V. M., Ang, Q. W., Pennell, C. E. & Fisher, S. E. (2011) CNTNAP2 variants affect early language development in the general population. Genes Brain Behav 10, 451–456. doi: 10.1111/j.1601-183X.2011.00684.x.
  • Whitehouse, H., & Cohen, E. (2012). Seeking a rapprochement between anthropology and the cognitive sciences: A problem-driven approach. Topics in Cognitive Science, 4, 404-412. doi:10.1111/j.1756-8765.2012.01203.x.

    Abstract

    Beller, Bender, and Medin question the necessity of including social anthropology within the cognitive sciences. We argue that there is great scope for fruitful rapprochement while agreeing that there are obstacles (even if we might wish to debate some of those specifically identified by Beller and colleagues). We frame the general problem differently, however: not in terms of the problem of reconciling disciplines and research cultures, but rather in terms of the prospects for collaborative deployment of expertise (methodological and theoretical) in problem-driven research. For the purposes of illustration, our focus in this article is on the evolution of cooperation.
  • Whitmarsh, S., Udden, J., Barendregt, H., & Petersson, K. M. (2013). Mindfulness reduces habitual responding based on implicit knowledge: Evidence from artificial grammar learning. Consciousness and Cognition, (3), 833-845. doi:10.1016/j.concog.2013.05.007.

    Abstract

    Participants were unknowingly exposed to complex regularities in a working memory task. The existence of implicit knowledge was subsequently inferred from a preference for stimuli with similar grammatical regularities. Several affective traits have been shown to influence artificial grammar learning (AGL) performance positively, many of which are related to a tendency for automatic responding. We therefore tested whether the mindfulness trait predicted a reduction of grammatically congruent preferences, and used emotional primes to explore the influence of affect. Mindfulness was shown to correlate negatively with grammatically congruent responses. Negative primes were shown to result in faster and more negative evaluations. We conclude that grammatically congruent preference ratings rely on habitual responses, and that our findings provide empirical evidence for the non-reactive disposition of the mindfulness trait.
  • Willems, R. M. (2013). Can literary studies contribute to cognitive neuroscience? Journal of literary semantics, 42(2), 217-222. doi:10.1515/jls-2013-0011.
  • Willems, R. M., & Francken, J. C. (2012). Embodied cognition: Taking the next step. Frontiers in Psychology, 3, 582. doi:10.3389/fpsyg.2012.00582.

    Abstract

    Recent years have seen a large amount of empirical studies related to ‘embodied cognition’. While interesting and valuable, there is something dissatisfying with the current state of affairs in this research domain. Hypotheses tend to be underspecified, testing in general terms for embodied versus disembodied processing. The lack of specificity of current hypotheses can easily lead to an erosion of the embodiment concept, and result in a situation in which essentially any effect is taken as positive evidence. Such erosion is not helpful to the field and does not do justice to the importance of embodiment. Here we want to take stock, and formulate directions for how embodiment can be studied in a more fruitful fashion. As an example we will describe a few studies that have investigated the role of sensori-motor systems in the coding of meaning (‘embodied semantics’). Instead of focusing on the dichotomy between embodied and disembodied theories, we suggest that the field move forward and ask how and when sensori-motor systems and behavior are involved in cognition.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2013). Foreign accent strength and listener familiarity with an accent co-determine speed of perceptual adaptation. Attention, Perception & Psychophysics, 75, 537-556. doi:10.3758/s13414-012-0404-y.

    Abstract

    We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming task speaker and either contained strongly accented words or did not. Strongly accented exposure led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment’s second half. In Experiment 3, listeners listened to the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment’s second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.
  • Wright, S. E., & Windhouwer, M. (2013). ISOcat - im Reich der Datenkategorien. eDITion: Fachzeitschrift für Terminologie, 9(1), 8-12.

    Abstract

    The ISOcat Data Category Registry (www.isocat.org) of Technical Committee ISO/TC 37 (Terminology and other language and content resources) describes field names and values for language resources. Recommended field names and reliable definitions are intended to ensure that language data can be reused independently of applications, platforms, and communities of practice (CoP). Data Category Selections can be viewed, printed, and exported, and, after free registration, new ones can be created.
  • Xiang, H., Dediu, D., Roberts, L., Van Oort, E., Norris, D., & Hagoort, P. (2012). The structural connectivity underpinning language aptitude, working memory and IQ in the perisylvian language network. Language Learning, 62(Supplement S2), 110-130. doi:10.1111/j.1467-9922.2012.00708.x.

    Abstract

    We carried out the first study on the relationship between individual language aptitude and structural connectivity of language pathways in the adult brain. We measured four components of language aptitude (vocabulary learning, VocL; sound recognition, SndRec; sound-symbol correspondence, SndSym; and grammatical inferencing, GrInf) using the LLAMA language aptitude test (Meara, 2005). Spatial working memory (SWM), verbal working memory (VWM) and IQ were also measured as control factors. Diffusion Tensor Imaging (DTI) was employed to investigate the structural connectivity of language pathways in the perisylvian language network. Principal Component Analysis (PCA) on behavioural measures suggested that a general ability might be important to the first stages of L2 acquisition. It also suggested that VocL, SndSym and SWM are more closely related to general IQ than SndRec and VocL, and distinguished the tasks specifically designed to tap into L2 acquisition (VocL, SndRec, SndSym and GrInf) from more generic measures (IQ, SWM and VWM). Regression analysis suggested significant correlations between most of these behavioural measures and the structural connectivity of certain language pathways, i.e., VocL and BA47-Parietal pathway, SndSym and inter-hemispheric BA45 pathway, GrInf and BA45-Temporal pathway and BA6-Temporal pathway, IQ and BA44-Parietal pathway, BA47-Parietal pathway, BA47-Temporal pathway and inter-hemispheric BA45 pathway, SWM and inter-hemispheric BA6 pathway and BA47-Parietal pathway, and VWM and BA47-Temporal pathway. These results are discussed in relation to relevant findings in the literature.
  • You, W., Zhang, Q., & Verdonschot, R. G. (2012). Masked syllable priming effects in word and picture naming in Chinese. PLoS One, 7(10): e46595. doi:10.1371/journal.pone.0046595.

    Abstract

    Four experiments investigated the role of the syllable in Chinese spoken word production. Chen, Chen and Ferrand (2003) reported a syllable priming effect when primes and targets shared the first syllable using a masked priming paradigm in Chinese. Our Experiment 1 was a direct replication of Chen et al.'s (2003) Experiment 3 employing CV (e.g., /ba2.ying2/, strike camp) and CVG (e.g., /bai2.shou3/, white haired) syllable types. Experiment 2 tested the syllable priming effect using different syllable types: e.g., CV (/qi4.qiu2/, balloon) and CVN (/qing1.ting2/, dragonfly). Experiment 3 investigated this issue further using line drawings of common objects as targets that were preceded either by a CV (e.g., /qi3/, attempt), or a CVN (e.g., /qing2/, affection) prime. Experiment 4 further examined the priming effect by a comparison between CV or CVN priming and an unrelated priming condition using CV-NX (e.g., /mi2.ni3/, mini) and CVN-CX (e.g., /min2.ju1/, dwellings) as target words. These four experiments consistently found that CV targets were named faster when preceded by CV primes than when they were preceded by CVG, CVN or unrelated primes, whereas CVG or CVN targets showed the reverse pattern. These results indicate that the priming effect critically depends on the match between the structure of the prime and that of the first syllable of the target. The effect obtained in this study was consistent across different stimuli and different tasks (word and picture naming), and provides more conclusive and consistent data regarding the role of the syllable in Chinese speech production.
  • Zeshan, U., Escobedo Delgado, C. E., Dikyuva, H., Panda, S., & De Vos, C. (2013). Cardinal numerals in rural sign languages: Approaching cross-modal typology. Linguistic Typology, 17(3), 357-396. doi:10.1515/lity-2013-0019.

    Abstract

    This article presents data on cardinal numerals in three sign languages from small-scale communities with hereditary deafness. The unusual features found in these data considerably extend the known range of typological variety across sign languages. Some features, such as non-decimal numeral bases, are previously unattested in sign languages, but familiar from spoken languages, while others, such as subtractive sub-systems, are rare in sign and speech. We conclude that for a complete typological appraisal of a domain, an approach to cross-modal typology, which includes a typologically diverse range of sign languages in addition to spoken languages, is both instructive and feasible.
  • Zhu, Z., Hagoort, P., Zhang, J. X., Feng, G., Chen, H.-C., Bastiaansen, M. C. M., & Wang, S. (2012). The anterior left inferior frontal gyrus contributes to semantic unification. NeuroImage, 60, 2230-2237. doi:10.1016/j.neuroimage.2012.02.036.

    Abstract

    Semantic unification, the process by which small blocks of semantic information are combined into a coherent utterance, has been studied with various types of tasks. However, whether the brain activations reported in these studies are attributable to semantic unification per se or to other task-induced concomitant processes still remains unclear. The neural basis for semantic unification in sentence comprehension was examined using event-related potentials (ERP) and functional Magnetic Resonance Imaging (fMRI). The semantic unification load was manipulated by varying the goodness of fit between a critical word and its preceding context (in high cloze, low cloze and violation sentences). The sentences were presented in a serial visual presentation mode. The participants were asked to perform one of three tasks: semantic congruency judgment (SEM), silent reading for comprehension (READ), or font size judgment (FONT), in separate sessions. The ERP results showed a similar N400 amplitude modulation by the semantic unification load across all three tasks. The brain activations associated with the semantic unification load were found in the anterior left inferior frontal gyrus (aLIFG) in the FONT task and in a widespread set of regions in the other two tasks. These results suggest that the aLIFG activation reflects semantic unification, which is distinct from other brain activations that may reflect task-specific strategic processing.

    Additional information

    Zhu_2012_suppl.dot
  • De Zubicaray, G. I., Acheson, D. J., & Hartsuiker, R. J. (Eds.). (2013). Mind what you say - general and specific mechanisms for monitoring in speech production [Research topic] [Special Issue]. Frontiers in Human Neuroscience. Retrieved from http://www.frontiersin.org/human_neuroscience/researchtopics/mind_what_you_say_-_general_an/1197.

    Abstract

    Psycholinguistic research has typically portrayed speech production as a relatively automatic process. This is because when errors are made, they occur as seldom as one in every thousand words we utter. However, it has long been recognised that we need some form of control over what we are currently saying and what we plan to say. This capacity to both monitor our inner speech and self-correct our speech output has often been assumed to be a property of the language comprehension system. More recently, it has been demonstrated that speech production benefits from interfacing with more general cognitive processes such as selective attention, short-term memory (STM) and online response monitoring to resolve potential conflict and successfully produce the output of a verbal plan. The conditions and levels of representation according to which these more general planning, monitoring and control processes are engaged during speech production remain poorly understood. Moreover, there remains a paucity of information about their neural substrates, despite some of the first evidence of more general monitoring having come from electrophysiological studies of error-related negativities (ERNs). While aphasic speech errors continue to be a rich source of information, there has been comparatively little research focus on instances of speech repair. The purpose of this Frontiers Research Topic is to provide a forum for researchers to contribute investigations employing behavioural, neuropsychological, electrophysiological, neuroimaging and virtual lesioning techniques. In addition, while the focus of the research topic is on novel findings, we welcome submission of computational simulations, review articles and methods papers.
  • Zwaan, R. A., Van der Stoep, N., Guadalupe, T., & Bouwmeester, S. (2012). Language comprehension in the balance: The robustness of the action-compatibility effect (ACE). PLoS One, 7(2), e31204. doi:10.1371/journal.pone.0031204.

    Abstract

    How does language comprehension interact with motor activity? We investigated the conditions under which comprehending an action sentence affects people's balance. We performed two experiments to assess whether sentences describing forward or backward movement modulate the lateral movements made by subjects who made sensibility judgments about the sentences. In one experiment subjects were standing on a balance board and in the other they were seated on a balance board that was mounted on a chair. This allowed us to investigate whether the action compatibility effect (ACE) is robust and persists in the face of salient incompatibilities between sentence content and subject movement. Growth-curve analysis of the movement trajectories produced by the subjects in response to the sentences suggests that the ACE is indeed robust. Sentence content influenced movement trajectory despite salient inconsistencies between implied and actual movement. These results are interpreted in the context of the current discussion of embodied, or grounded, language comprehension and meaning representation.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). An empirical investigation of expression of multiple entities in Turkish Sign Language (TİD): Considering the effects of modality. Lingua, 122, 1636-1667. doi:10.1016/j.lingua.2012.08.010.

    Abstract

    This paper explores the expression of multiple entities in Turkish Sign Language (Türk İşaret Dili; TİD), a less well-studied sign language. It aims to provide a comprehensive description of the ways and frequencies in which entity plurality in this language is expressed, both within and outside the noun phrase. We used a corpus that includes both elicited and spontaneous data from native signers. The results reveal that most of the expressions of multiple entities in TİD are iconic, spatial strategies (i.e. localization and spatial plural predicate inflection) none of which, we argue, should be considered as genuine plural marking devices with the main aim of expressing plurality. Instead, the observed devices for localization and predicate inflection allow for a plural interpretation when multiple locations in space are used. Our data do not provide evidence that TİD employs (productive) morphological plural marking (i.e. reduplication) on nouns, in contrast to some other sign languages and many spoken languages. We relate our findings to expression of multiple entities in other signed languages and in spoken languages and discuss these findings in terms of modality effects on expression of multiple entities in human language.
